#### Abstract

In this study, a particle swarm optimization (PSO) algorithm with a negative-gradient perturbation and a binary-tree depth-first strategy (GB-PSO) is proposed. The negative-gradient term accelerates particle optimization in the direction of decreasing objective function value. To calculate the step size of this gradient term more easily, a ratio-based method is proposed. In addition, a new PSO search strategy is proposed. Each iteration of PSO yields not only the current optimal solution of the swarm but also the solution with the maximum 2-norm. Under the current iterate of PSO, these two solutions are its child nodes; in the sense of a binary tree, the three solutions form a parent-child relationship, and the solutions generated throughout the search process constitute a binary tree. PSO uses a traceable depth-first strategy on this tree to determine the optimal solution. Compared with the linear search strategy adopted by several algorithms, it can fully utilize the useful information obtained during the iterative process, construct a variety of particle-swarm search paths, prevent premature convergence, and enhance global optimization. The experimental results show that the algorithm outperforms some state-of-the-art PSO algorithms in terms of search performance.

#### 1. Introduction

The general form of an optimization problem is as follows:

$$\min_{x \in S} f(x). \tag{1}$$

This represents the minimum value of the objective function $f(x)$ over the feasible set $S$, where $x$ denotes the decision variable and $f$ the objective function, and the optimal solution is any feasible solution that minimizes the objective function. Optimization problems are common in industrial production, management science, computer science, and other application fields and disciplines, and more complex optimization problems emerge as these fields develop. To solve large-scale complex optimization problems, traditional optimization methods such as Newton's method, the conjugate gradient method, and the trust-region method are used. These methods require first- and second-order derivative information of the objective function, involving the calculation of the gradient and Hessian matrix, including matrix inversion [1–10]. Although traditional methods converge quickly, the structure of the objective function in practical engineering applications is often complex, and the gradient and Hessian matrices are difficult to calculate or do not exist. These shortcomings limit the scope of application of traditional optimization algorithms. Therefore, the search for efficient, universal, and easy-to-calculate optimization algorithms has become one of the primary research topics in related disciplines. Since the 1980s, the theory and methods of swarm intelligence algorithms have been extensively studied, applied, and developed. Swarm intelligence algorithms build stochastic optimization procedures by simulating the group behavior of natural organisms, such as the ant colony algorithm, which simulates ant foraging; the particle swarm algorithm, which simulates bird foraging; and the shuffled frog leaping algorithm, which simulates frog foraging. These swarm intelligence algorithms place few requirements on the mathematical model of the objective function and can handle nonconvex or nondifferentiable functions.
Therefore, they outperform traditional algorithms in solving complex engineering problems [11–15]. Among them, the particle swarm optimization (PSO) algorithm has few parameters, is simple to implement, and has seen widespread use in image processing, pattern recognition, optimization, and other fields [16–18]. However, like other swarm intelligence algorithms, PSO performance declines sharply as the search-space dimension increases, and it suffers from slow convergence and poor accuracy late in the evolution. To address these problems, researchers have improved the PSO algorithm in terms of parameter selection, multi-algorithm fusion, and algorithm topology, and have achieved a number of results. A novel self-adaptive PSO algorithm proposed by Tang et al. [19] alleviated the convergence issues of the basic PSO by fine-tuning its three primary control parameters. Zhang et al. proposed an improved adaptive PSO algorithm [20], which controlled the search direction of particles by dynamically adjusting a sensitivity parameter. Vafashoar et al. studied a PSO algorithm based on cellular learning automata and a maximum-likelihood updating mechanism [21]. To improve the performance of PSO on complex problems, an all-dimension-neighborhood-based PSO with a randomly selected neighbors learning strategy was proposed in [22]. A PSO algorithm integrated with random-restart hill climbing was presented for the flexible job-shop scheduling problem (FJSP) [23]. Girish proposed a hybrid PSO algorithm in a rolling-horizon framework to solve the aircraft landing problem [24]. Feng et al. proposed a new PSO algorithm that realizes conditional learning behavior and enables particles to perform natural conditional behavior under unconditional motivation [25]. Zou et al. proposed a multiobjective PSO algorithm based on grid technology and a multistrategy mechanism; the algorithm uses grid technology to increase the number of particles considered and to improve the convergence and diversity of the multiobjective PSO algorithm [26]. Fan and Jen proposed an enhanced partial-search PSO (EPS-PSO) that uses the concept of cooperative multiswarm optimization; by designing a special cooperative search strategy that prevents particles from falling into local optima and effectively locates the global optimum, the convergence and efficiency of the PSO algorithm were improved [27]. Finally, a hybrid algorithm based on the Nelder–Mead (NM) simplex search method and PSO was proposed in [28] for unconstrained optimization; it improves on the standard PSO algorithm by combining hybrid strategies so that it converges to the optimal solution faster and more accurately.

In this study, an improved algorithm is proposed from the perspective of fully utilizing PSO historical data to build a traceable nonlinear search-evolution strategy. First, a perturbation term along the negative gradient of the objective function is added to the velocity update formula. This term provides a clear direction (the negative gradient is the direction of fastest descent of the function value) when a particle searches its local neighborhood, avoiding a completely random search; this speeds up individual particles and enhances the optimization efficiency of the whole swarm. Second, using a queue and a binary tree to preserve the historical data obtained during evolution, a binary-tree depth-first search strategy is constructed. In this manner, when the particle swarm falls into a premature-convergence region, it can return to an upper binary-tree node by backtracking and select another direction, thus escaping from the current premature region and continuing the optimization. When the binary tree is empty, a new system of solution vectors is constructed from the queue, and evolution continues until the binary tree and the queue are simultaneously empty or the number of iterations reaches the preset maximum, at which point the algorithm ends. The search-evolution strategy proposed in this study is thus more flexible and diverse, and the common existing search strategy is only a special case of it; consequently, the performance of the algorithm is better.

The remainder of this paper is organized as follows: the original PSO algorithm is presented in Section 2. In Section 3, the GB-PSO algorithm is proposed. The simulation results are presented in Section 4, and the conclusions are drawn in Section 5.

#### 2. Particle Swarm Optimization

Swarm intelligence algorithms are methods inspired by observing the predation, migration, and other group activities of animals. The PSO algorithm is a population-based intelligent algorithm proposed by Kennedy and Eberhart in 1995 [29]. Its principle is simple, its velocity and position update formulas are easy to implement, it has a memory function and few parameters to adjust, and it offers considerable advantages in terms of optimization stability and global convergence.

The standard PSO algorithm is described as follows. Let the dimension of the search space be $D$ and the swarm size be $N$. The current position of the $i$-th particle is $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, its current velocity is $v_i$, and its current personal best position is $p_i$. For the entire particle swarm, a global best solution $g$ is maintained. In each iteration, the particle's velocity and position are updated as follows:

$$v_i(t+1) = w\,v_i(t) + c_1 r_1 \big(p_i(t) - x_i(t)\big) + c_2 r_2 \big(g(t) - x_i(t)\big), \tag{2}$$

$$x_i(t+1) = x_i(t) + v_i(t+1), \tag{3}$$

where $w$ denotes the inertia weight, which balances the global and local search abilities of the algorithm; $c_1$ and $c_2$ are the individual and social cognitive factors, respectively; and $r_1$ and $r_2$ are random numbers in the range $[0, 1]$.
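The standard update loop can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the function name `pso_minimize`, the clipping of positions to the search box, and the default parameter values are our own choices.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box using the standard PSO velocity/position update."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # current positions
    v = np.zeros_like(x)                              # current velocities
    p = x.copy()                                      # personal best positions
    p_val = np.apply_along_axis(f, 1, x)              # personal best values
    g = p[p_val.argmin()].copy()                      # global best position
    g_val = float(p_val.min())
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        # position update, kept inside the search box
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < p_val
        p[improved], p_val[improved] = x[improved], vals[improved]
        if p_val.min() < g_val:
            g_val = float(p_val.min())
            g = p[p_val.argmin()].copy()
    return g, g_val
```

For example, minimizing the 2-D sphere function over $[-5, 5]^2$ with these defaults drives the best value close to zero.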

#### 3. The Proposed Algorithm

The PSO algorithm has the advantages of a simple model, few parameters, and easy implementation. However, when optimizing some complex functions, it tends to exhibit slow convergence, low optimization accuracy, and premature convergence. The main reason is that particles usually move around the global and local best locations but do not fully explore the entire search space. Considering these shortcomings of the standard PSO algorithm, we propose the following two improvements:

##### 3.1. Gradient Perturbation

In this section, a perturbation based on the negative gradient of the objective function is proposed, which enhances the optimization ability of each particle in its local neighborhood. It is well known that, where the objective is differentiable, the gradient method is an efficient, simple, and easily programmed optimization algorithm. The improved algorithm adds a negative-gradient term to equation (2) so that, in each iteration, each particle moves toward the direction of steepest descent of the objective function, increasing the targeting of the search and accelerating the evolution. To a certain extent, the optimal value lies within the neighborhood reached along the steepest-descent direction of the objective function.
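Assuming the perturbation enters equation (2) as an additive term with step size $\lambda$, the modified velocity update of equation (4) can be written as:

```latex
v_i(t+1) = w\,v_i(t) + c_1 r_1\bigl(p_i(t)-x_i(t)\bigr)
         + c_2 r_2\bigl(g(t)-x_i(t)\bigr)
         - \lambda\,\nabla f\bigl(x_i(t)\bigr)
```

where $\lambda > 0$ is the step size whose computation is described below. The sign and placement of the gradient term here are a reconstruction from the surrounding description rather than a verbatim formula.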

The difference between equations (2) and (4) is illustrated in Figure 1.

It is clear that equation (4) contains information about the decline of the function value, which helps particles find the optimum. Because the objective function is not necessarily differentiable over the search domain, we use an approximate finite-difference method to calculate its gradient.
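One standard way to realize such an approximation is a central finite difference (a sketch under that assumption; the helper name `approx_gradient` and the step `h` are our choices):

```python
import numpy as np

def approx_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x.

    Usable even when an analytic gradient is unavailable, provided f is
    locally smooth enough for the finite differences to be meaningful.
    """
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        # central difference in coordinate j: (f(x+h e_j) - f(x-h e_j)) / (2h)
        grad[j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad
```

For a smooth function such as $f(x) = \sum_j x_j^2$, this reproduces the analytic gradient $2x$ to high accuracy.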

The step size $\lambda$ in equation (4) can be calculated using the Wolfe rule, that is, $\lambda$ satisfies the following inequalities [30]:

$$f(x - \lambda d) \le f(x) - c_1 \lambda \nabla f(x)^{T} d, \qquad \nabla f(x - \lambda d)^{T} d \le c_2 \nabla f(x)^{T} d,$$

where $d = \nabla f(x)$ and $0 < c_1 < c_2 < 1$ are given constants. However, this method requires a large amount of computation and is time-consuming. To reduce the computation and improve efficiency, we propose a simpler method in this study. First, a ratio is introduced that measures the similarity between the objective function and its first-order (linear) approximation along the descent direction; when the step size tends to zero, the ratio tends to 1 and the two coincide. In the process of iteratively calculating the step size, when the ratio is at least a preset threshold $\varepsilon$, the linear model is sufficiently similar to the objective function, the current $\lambda$ is accepted, and the function value decreases along the direction $-\nabla f(x)$; otherwise, $\lambda$ is reduced and the ratio is recalculated until the condition is satisfied. The specific steps of the algorithm are as follows:

**Step 0**. Given the point $x$, the gradient $\nabla f(x)$, and the threshold constant $\varepsilon$.

**Step 1**. Because the ratio tends to 1 as the step size shrinks, $\lambda$ can be initialized to a large positive number; then proceed to **Step 2**.

**Step 2**. If the ratio is at least $\varepsilon$, stop and output $\lambda$. Otherwise, reduce $\lambda$ and repeat **Step 2**.
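The step-size rule above can be sketched as a backtracking loop. The exact ratio used in the paper is not reproduced here; we assume it compares the actual decrease of $f$ along the negative gradient with its first-order prediction, which matches the description that the ratio measures similarity between the function and its local model. The names `ratio_step_size`, `lam0`, and `lam_min` are our own.

```python
import numpy as np

def ratio_step_size(f, x, grad, lam0=1.0, eps=0.5, lam_min=1e-12):
    """Backtracking choice of the step size lambda for equation (4).

    Assumed ratio: actual decrease of f along -grad divided by its
    first-order prediction lam * ||grad||^2. lambda is halved until the
    ratio reaches the threshold eps (Steps 0-2 of Section 3.1).
    """
    x = np.asarray(x, dtype=float)
    g2 = float(np.dot(grad, grad))
    if g2 == 0.0:                        # stationary point: no descent step
        return 0.0
    lam = lam0                           # Step 1: start from a large step
    while lam > lam_min:                 # Step 2: test the similarity ratio
        predicted = lam * g2             # first-order predicted decrease
        actual = f(x) - f(x - lam * grad)
        if actual / predicted >= eps:    # accept: f really decreases along -grad
            return lam
        lam *= 0.5                       # otherwise shrink lambda and retry
    return lam
```

On the quadratic $f(x) = \sum_j x_j^2$ at $x = (1, 1)$ with gradient $(2, 2)$, the loop rejects $\lambda = 1$ (no decrease) and accepts $\lambda = 0.5$, which lands exactly on the minimizer.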

##### 3.2. Binary Tree Depth-First Search Strategy

The standard PSO and many improved algorithms use a linear search strategy for global optimization. This strategy uses the optimal solution vector obtained by the current swarm to indicate the direction of the next evolution. Its advantages are that it is simple in form and easy to program; however, it has two shortcomings. (1) When the swarm's best solution approaches a local optimum, under the action of equation (3), the particle swarm enters the neighborhood of that local optimum. Because the linear search is unidirectional, the entire swarm converges to the local optimum even after searching its neighborhood for a long time, and it lacks the ability to jump out of the premature-convergence trap and continue global optimization. (2) Throughout the evolution of the particle swarm, only the current swarm best is preserved, not the various other solutions generated during evolution, which could be used to construct more diverse evolutionary strategies. This practice of preserving only the current global best and using it to build the swarm's evolution path is similar to that in [27]. Based on these observations, the queue and binary-tree data structures are adopted to save the current optimal, suboptimal, and norm-maximum solutions, and the corresponding operations are defined to construct a traceable, multidirectional search strategy.
The steps of the algorithm are as follows:

Step 0: Randomly generate the initial solution vector for each particle in the search space; initialize the personal best vectors, the velocity vectors, and the current swarm-best solution vector, together with a suboptimal vector. Generate a queue, and push the swarm-best and suboptimal vectors into it. Generate a binary tree, set the data field of the root node to null, use the norm-maximum solution as the data field of the left child node (that is, the solution at which the function value varies the most in the neighborhood centered on the current best, and which is therefore far from the premature zone), and use the swarm-best solution as the data field of the right child node, as shown in Figure 2.

Step 1: The particle swarm evolves as follows. At each step, the depth-first search prefers the right-child direction, and the particle velocity and position are updated by equation (8). If the particle swarm evolves successfully, that is, the new swarm best improves on the previous one, compute the new swarm-best and norm-maximum solutions, insert them into the corresponding nodes of the binary tree (Figure 3(a)), and push the suboptimal solution into the queue, as shown in Figure 3(b). If the swarm keeps evolving, the iterations continue in this manner up to the maximum number, and the optimal value is obtained. If the swarm stops evolving, the algorithm proceeds to **Step 2**.

Step 2: When the particle swarm cannot evolve, proceed as follows. Taking the current solution as the starting point and equation (8) as the iterative formula, use the ordinary evolutionary method to attempt several forward searches, then test whether the swarm evolves. If it can, continue to evolve forward, as shown in Figure 4(a). If the particle swarm is still unable to evolve, it can be considered trapped in the neighborhood of a local optimum; the search direction then backtracks to the upper binary-tree node, takes the left child node as a new starting point, retrieves its solution information, and continues to evolve according to **Step 1**, as shown in Figure 4(b). If the particle swarm cannot evolve for a long time, repeated backtracking eventually climbs past the root node of the binary tree (in the programming implementation, a stack stores the binary-tree nodes and the algorithm is realized by push and pop operations, so this situation corresponds to an empty stack), as shown in Figure 4(c). The system of solution vectors based on the current optimal vector can then no longer evolve; the algorithm proceeds to **Step 3**, and the system is reconstructed from the suboptimal solution.

Step 3: Take the element at the head of the queue as the current element and move the head pointer to the next element, as shown in Figure 5.
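The stack-and-queue mechanics of these steps can be sketched structurally as follows. This is a skeleton under our own naming (`SearchNode`, `tree_guided_search`), where `evolve` abstracts a burst of PSO iterations seeded from a given solution; it is not the paper's implementation.

```python
from collections import deque

class SearchNode:
    """Binary-tree node holding a candidate swarm solution.

    Right child: the swarm-best continuation (explored first, depth-first);
    left child: the 2-norm-maximum alternative, kept for backtracking.
    """
    def __init__(self, solution):
        self.solution = solution
        self.left = None
        self.right = None

def tree_guided_search(evolve, fitness, root_solution, restart_queue,
                       max_iters=1000):
    """Skeleton of the depth-first strategy of Section 3.2.

    `evolve(solution)` is assumed to return (best, norm_max, improved):
    the new swarm best, the 2-norm-maximum solution, and whether the
    swarm improved. `fitness` ranks solutions (lower is better).
    """
    stack = [SearchNode(root_solution)]    # stack realizes the backtracking
    queue = deque(restart_queue)           # suboptimal vectors for restarts
    best_seen = root_solution
    iters = 0
    while (stack or queue) and iters < max_iters:
        if not stack:                      # tree exhausted: rebuild from queue
            stack.append(SearchNode(queue.popleft()))
        node = stack.pop()
        best, norm_max, improved = evolve(node.solution)
        iters += 1
        if improved:                       # deepen along the right child
            node.right = SearchNode(best)
            node.left = SearchNode(norm_max)
            stack.append(node.left)        # left child tried only on backtrack
            stack.append(node.right)       # right child explored first
            if fitness(best) < fitness(best_seen):
                best_seen = best
        # if not improved, the next pop backtracks to a left sibling
    return best_seen
```

With a toy `evolve` that halves a scalar while it stays above a tolerance (and reports twice the scalar as the norm-maximum alternative), the search descends the right chain, backtracks through left siblings, and still returns the best value found.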


After the corresponding assignments are completed, the above steps are followed to search again. When the binary tree and the queue are both empty, or the number of iterations reaches a preset value, the algorithm terminates and outputs the best solution obtained during the entire iteration process.

From the above steps, the linear search is only a special case of this algorithm, namely the situation shown in Figure 3(a). In actual computations, however, particle swarms cannot always evolve, particularly when optimizing complex functions, and the entire swarm enters a premature region at some stage of the evolution, in which case a linear search lacks the ability to jump out of the trap. By contrast, the proposed algorithm uses a variety of solution information to construct a traceable nonlinear search, and thus has a more flexible search strategy and more powerful global optimization capability.

#### 4. Simulation Results

To verify the performance of GB-PSO, 22 numerical functions that are widely adopted in performance comparisons of global optimization algorithms (some of the test functions can be found in [28]) were selected, and the performance of the algorithm was assessed by solving for their minimum values. The definitions of the benchmark functions and their detailed information are listed in Tables 1 and 2, respectively. The functions are divided into three groups based on their physical properties and shapes.

The first group contains eight unimodal benchmark functions. From their mathematical properties and geometric distribution, these functions have a single mode, that is, there is only one global optimum, and the optimal solution is the center of the search interval; the convergence rate of the search algorithm is therefore the decisive factor in finding the global optimum. The second group includes eleven multimodal benchmark functions, whose number of local minima increases with the dimension of the variable; this makes it easy for an algorithm to fall into a premature zone when searching for the global optimum, and harder for it to converge to the global optimal value. The remaining functions are rotated and shifted functions. For the shifted functions, the global optimal vector is no longer the center of the search interval but is shifted to a position vector that, in the simulation experiments, is randomly generated with each component lying within the search interval. Rotating a function does not change its shape, but it makes the function harder for an algorithm to identify. The rotation matrix was randomly generated with each element drawn from the standard normal distribution and was then orthogonalized by the Gram–Schmidt method.
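The construction of the rotation matrix described above can be sketched as follows (a sketch only; the function name and the seeding are our choices):

```python
import numpy as np

def random_rotation_matrix(dim, seed=None):
    """Random orthogonal matrix for the rotated benchmarks.

    Each entry is drawn from the standard normal distribution, and the
    columns are then orthonormalized with classical Gram-Schmidt.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((dim, dim))
    q = np.zeros_like(a)
    for j in range(dim):
        v = a[:, j].copy()
        for k in range(j):
            # remove the component of column j along each earlier column
            v -= np.dot(q[:, k], a[:, j]) * q[:, k]
        q[:, j] = v / np.linalg.norm(v)  # normalize to unit length
    return q
```

A rotated benchmark is then evaluated as $f(Rx)$, which preserves the function's shape while decorrelating its coordinates.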

In the experiments, all parameters were initialized randomly according to their logical meaning. All experiments were performed in a Windows XP Professional environment on an Intel dual-core system with 2.11 GHz and 2.0 GB RAM, and the code was implemented in compiled C.

##### 4.1. Testing Performance of the Proposed Algorithm

In this subsection, we verify the effectiveness of the proposed GB-PSO algorithm. The maximum number of iterations was set to 1500 for all functions, and the particle number was 120 in 20 dimensions. Each experiment was independently repeated 50 times for every function. The means and standard deviations of the best values for the 22 benchmark functions obtained by the GB-PSO algorithm are shown in Table 3. Because of limited space, we list the convergence curves of the optimal value for only some of the functions in Figure 6.


These results indicate that the GB-PSO algorithm is effective. Its behavior can be divided into three stages; as an example, a typical convergence curve is shown in Figure 7. In the first stage, the two curves largely coincide, which shows that the algorithm converges quickly and the swarm rapidly approaches the global optimum, indicating that the negative-gradient perturbation term plays its intended role. In the second stage, the particle swarm is trapped in a premature region; because the algorithm adopts the binary-tree depth-first search strategy, the current swarm optimum is updated with high frequency and large jumps, and these transitions eventually cause the particle swarm to escape from the region, continue to evolve, and move closer to the global optimum. In the third stage, because the particle swarm is already close to the global optimum, its further attempts to evolve fail, and the algorithm converges to its final optimal value. Over several experiments, the performance of the algorithm was observed to depend on the parameter values, with different convergence behavior under different settings; we will therefore study parameter-selection strategies to further improve the performance of the algorithm.

##### 4.2. Comparison of GB-PSO with Other Algorithms

In this simulation, to further verify the performance of the GB-PSO algorithm, we compared it with four state-of-the-art PSO algorithms: the novel self-adaptive PSO algorithm (SAPSO) [20], the hybrid particle swarm optimizer with sine cosine acceleration coefficients (H-PSO-SCAC) [31], the improved global-best-guided particle swarm optimization with learning operation (IGPSO) [32], and the competitive-mechanism-based multiobjective PSO (CMOPSO) [33]. For all tested algorithms, the maximum number of iterations was set to 2000 for all functions in 30 dimensions, the particle number was 120, and each experiment was independently repeated 20 times for every function. The results are listed in Table 4, and Figure 8 shows the convergence graphs in terms of the best mean fitness value of each algorithm for the benchmark functions.


From the results in Table 4, for the eight unimodal functions, the proposed GB-PSO algorithm performs better than the SAPSO, H-PSO-SCAC, IGPSO, and CMOPSO algorithms on all but one function, on which CMOPSO exhibits the best performance. Taken as a whole, the proposed GB-PSO exhibits strong computational power, and its optimization results are very close to the theoretical global optimum. For the multimodal functions, the GB-PSO algorithm performs best and achieves the global optimal solutions, or comes close to them, on two functions; however, on two other multimodal functions, it could not reach the global optimal values, because the choice of parameters strongly affects its performance, as described above. For the ten rotated and shifted functions, owing to their complexity, none of the algorithms could find the global optimal value in every case, although the obtained values were within an acceptable range. Comparatively, the GB-PSO algorithm is better than the other algorithms on these test functions except for two of them. From the simulation results, this performance difference is evident.

Figures 8(a)–8(f) show the convergence curves obtained by the five algorithms on six functions. Figure 8(a) clearly shows that GB-PSO has a faster convergence speed and more powerful optimization ability than the other four methods; although it falls into a local optimum during evolution, after a few additional iterations the particle swarm escapes from the local optimum and converges to a higher-precision solution. Figure 8(b) shows that GB-PSO has the best performance for that benchmark function. Figure 8(c) shows that the GB-PSO algorithm surpasses all other approaches and finds an approximately global optimal solution. Figure 8(d) shows that the GB-PSO algorithm has the fastest initial convergence speed. From Figure 8(e), in terms of search accuracy, the GB-PSO algorithm is slightly inferior to SAPSO, but it is stronger in terms of stability. Figure 8(f) shows that GB-PSO has stronger convergence speed and stability than SAPSO, H-PSO-SCAC, IGPSO, and CMOPSO.

In summary, as shown in Table 4 and Figure 8, the GB-PSO algorithm achieves better search performance and a stronger ability to escape from local optima than SAPSO, H-PSO-SCAC, IGPSO, and CMOPSO. Thus, the GB-PSO method is highly efficient for solving numerical optimization problems.

#### 5. Conclusions

The GB-PSO algorithm was proposed to enhance the search performance of the PSO algorithm and obtain a good global optimal solution. It combines PSO with a gradient-descent perturbation and a binary-tree depth-first search strategy. First, the negative-gradient perturbation term is introduced into the particles' velocity and position update formulas; at the level of individual particles, this term enhances local optimization ability and thus accelerates the optimization of the swarm as a whole. Second, the binary-tree depth-first search strategy makes full use of the various solutions generated during the optimization process to construct a multidirectional, backtracking optimization mechanism that avoids the premature convergence of unidirectional search. This increases the global optimization ability of the algorithm and improves its accuracy.

#### Data Availability

The data used to support the findings of this study are included within the article.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61561041; the Characteristic Innovation Projects of Guangdong Universities in 2021 under Grant 2021KTSCX121; the Shaoguan Science and Technology Planning Project in 2021 under Grant 210808234530883; and the 2022 Shaoguan Social Development Science and Technology Collaborative Innovation System Construction Project under Grant 220607114531052.