#### Abstract

The particle swarm optimization (PSO) algorithm is widely used because it has few control parameters and converges quickly. However, because each particle learns only from the global optimal particle, the algorithm suffers from low accuracy and easily falls into local optima. To overcome this defect, a multipopulation particle swarm optimization algorithm with neighborhood learning (MPNLPSO) is proposed in this article. In MPNLPSO, a small-world network neighborhood learning strategy is proposed so that particles learn from their neighborhood optimal particles instead of only the global optimal particle. Furthermore, the concept of multipopulation cooperation is introduced to balance global exploration and local exploitation. In addition, a dynamic opposition-based learning strategy is proposed to effectively reactivate particles that have stagnated during the search. Moreover, to improve the accuracy of the algorithm and slow the decline of population diversity as the search proceeds, Lévy flight is introduced to randomly perturb the historical optimal and neighborhood optimal particles. To verify the performance of the proposed algorithm experimentally, twenty benchmark functions are solved. Experimental results show that MPNLPSO achieves high efficiency and performance with considerable robustness.

#### 1. Introduction

Recent years have witnessed the emergence of many metaheuristics, such as monarch butterfly optimization (MBO) [1], the slime mould algorithm (SMA) [2], the moth search algorithm (MSA) [3], and the colony predation algorithm (CPA) [4]. MBO [1], first proposed by Wang in 2019, simplifies and idealizes the migration behavior of the eastern North American monarch population by updating the positions of the monarch butterflies in two ways. Li et al. [2] proposed SMA, which uses the oscillation mode of slime mould in nature with adaptive weights to simulate the positive and negative feedback produced by the propagation wave of a bio-oscillator as the mould forms the optimal path connecting food, giving the algorithm excellent exploratory ability and exploitation propensity. Inspired by the phototaxis and Lévy flights of moths, MSA was developed by Wang [3]. Natural moths are family insects, and in MSA the best moth individual is viewed as the light source. Moths close to the fittest one tend to fly around their own positions in the form of Lévy flights, whereas, due to phototaxis, moths comparatively far from the fittest one tend to fly directly toward the best one in a big step. These two behaviors correspond to the exploitation and exploration processes of any metaheuristic optimization method. Finally, based on the corporate predation of animals in nature, CPA was proposed by Tu et al. [4]; it maps the predation strategies into a mathematical model that uses a success rate to adjust the strategy and simulates the selective abandonment behavior of hunting animals.

As a traditional optimization algorithm, particle swarm optimization (PSO) is a type of evolutionary algorithm based on swarm intelligence. It originated from simulation studies of the swarm intelligence behaviors of natural organisms, for example, the migration of birds [5] and the foraging of ant colonies [6], and was first proposed by Kennedy and Eberhart [7] in 1995 and later refined by Shi and Eberhart [8] for the optimization of continuous problems. In this type of algorithm, the group effectively identifies the characteristics of the problem by sharing evolutionary information and considering the environment surrounding each individual, and finally guides the search toward more promising areas. PSO establishes an evolutionary search model based on the intelligent behavior of swarms, modeled on the foraging behavior of flocks of birds and schools of fish. In this model, each individual constantly moves toward the position of the best individual in the group, influenced by the interaction between its own best historical experience and that of any individual in its neighborhood, and thereby finds the global optimum of the problem. Since the algorithm was proposed, it has attracted a great deal of research owing to its advantages, such as high efficiency and stability in solving complex problems and independence from the characteristics of the problem. However, although most modified PSO variants can satisfy the requirements of ordinary optimization problems, as the scale of optimization problems continues to increase, the existing PSO algorithm and its variants find it increasingly difficult to meet the performance requirements.
Studies have shown that most improvements of the PSO algorithm focus on its global search ability and neglect its local search ability, which leads to problems such as insufficient population diversity and premature convergence when dealing with complex optimization problems [9]. In addition, the evolution strategy is relatively simple: each particle exchanges information only with the best individual in its neighborhood and therefore ignores the important influence of other individuals on the evolution of the entire group, so the PSO algorithm usually struggles with complex problems. Many researchers have therefore worked on the PSO algorithm from theoretical analysis to practical application, for example, the adaptive tuning of algorithm control parameters [10], the design of population topology [11], and hybrid designs with other evolutionary algorithms [12]. The PSO algorithm has the advantage of requiring few control parameters, but those parameters, for example, the inertia weight and acceleration coefficients, have a significant impact on its performance. Eltamaly [13] proposed a novel strategy that can determine the optimal values of the control parameters of a PSO. The strategy uses two nested PSO (NESTPSO) searching loops: the inner one contains the original objective function, and the outer one uses the inner PSO as a fitness function. The control parameters and the swarm size act as the optimization variables of the outer loop and are optimized for the lowest premature convergence rate, the lowest number of iterations, and the smallest swarm size. Wei-Min et al. [14] proposed a simpler PSO variant called *θ*-PSO. In *θ*-PSO, an increment of a phase angle vector replaces the increment of the velocity vector, and the positions are decided by a mapping of the phase angles.
Benchmark tests on nonlinear functions show that *θ*-PSO is much more effective than the standard PSO. Tian et al. [15] proposed a chaotic PSO algorithm based on a sigmoid acceleration coefficient. A uniformly distributed initial population is generated through logistic mapping, the sigmoid function is used to adaptively adjust the acceleration coefficient, and the particles are updated with chaotic reinitialization and a Gaussian mutation strategy to maintain the diversity of the population, so that the algorithm can keep exploring potential search areas. Liu et al. [16] proposed the AWPSO algorithm based on adaptive weights derived from the sigmoid function. In this algorithm, considering the distance between each particle and the global optimal position, an adaptive acceleration coefficient update strategy based on the sigmoid function is designed to improve the convergence performance of the algorithm.

Generally, PSO algorithms use only the best historical experience of individuals and of the group to guide the evolution of the population. This strategy is simple and effective, but it is difficult to obtain high-quality results when dealing with complex problems. Therefore, the design of efficient evolution strategies has attracted extensive research. An adaptive PSO algorithm using a scale-free network topology was proposed by Li et al. [17]. Based on the power-law degree distribution of scale-free networks, this algorithm constructs a corresponding neighborhood for each particle. It then selects elite particles from each community to participate in the particle evolution process, giving full play to the guiding role of elite particles in the population search. In addition, a new adaptive weight strategy and a differential evolution operation are introduced to balance global exploration and local exploitation during the search. As for the population topology, Chen et al. [18] proposed a hybrid ring topology: in the early stage of evolution, a sparse topology is constructed to enhance population diversity and help locate multiple optima, while in the later stage, the communication topology is switched to a relatively dense one to improve convergence on the optima found; the threshold that controls the switch of the topology and its effect are also analyzed in that study. In multimodal optimization problems, niching parameters are often used to tell the algorithm the distance between the two closest optima or the number of optima in the search space, and they are typically difficult to set because they are problem dependent.
Li [19] proposed a particle swarm optimization algorithm using a ring neighborhood topology without any niching parameters, and experimental results suggest that PSO algorithms using the ring topology provide superior and more consistent performance than some existing PSO niching algorithms that require niching parameters. Qin et al. [20] presented an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) that overcomes the drawbacks of the canonical PSO learning strategy. IILPSO is inspired by the phenomenon in human society that interactive learning takes place among different groups. Particles in IILPSO are divided into two swarms. The interswarm interactive learning (IIL) behavior is triggered when the best particle's fitness value in both swarms does not improve for a certain number of iterations. According to the best particle's fitness value of each swarm, the softmax method and the roulette method are used to determine the roles of the two swarms, namely the learning swarm and the learned swarm. In addition, a velocity mutation operator and a global best vibration strategy are used to improve the algorithm's global search capability. The IIL strategy is applied to PSO with global star and local ring structures, termed the IILPSO-G and IILPSO-L algorithms, respectively. Xu et al. [21] proposed a dimensional learning strategy (DLS) for discovering and integrating the promising information of the population's best solution according to the personal best experience of each particle. On this basis, a two-swarm learning PSO (TSLPSO) algorithm using different learning strategies was proposed: one subpopulation constructs learning exemplars by DLS to guide the local search of the particles, and the other constructs learning exemplars by the comprehensive learning strategy to guide the global search.

Moreover, multi-objective approaches to optimization have also been attempted. Zhang et al. [22] proposed a modified particle swarm optimization (AMPSO) to solve multimodal multi-objective problems. Firstly, a dynamic neighborhood-based learning strategy is introduced to replace the global learning strategy, which enhances the diversity of the population; meanwhile, a competition mechanism is utilized to further enhance the performance of PSO. An improved version of the directed weighted complex network particle swarm optimization using the genetic algorithm (GDWCN-PSO) was presented by Bharti et al. [23]. This method applies genetic operations after each update to enhance convergence and diversity.

Furthermore, the PSO algorithm and its variants have been applied to many realistic problems. Bharti et al. [23] applied GDWCN-PSO to optimal key-based medical image encryption, one of the most challenging problems in health IoT for protecting sensitive and confidential patient data and for addressing the major concerns of data integrity and security in today's advanced digital world. In addition, constrained optimization problems are more complex than unconstrained ones because of their multiple constraints with different requirements. To solve this kind of problem, Parsopoulos and Vrahatis [24] first proposed the penalty function idea, converting the constrained problem into an unconstrained one. However, this approach requires an appropriate penalty factor to be selected; otherwise, the efficiency of the algorithm is significantly affected.

Although the studies above have introduced various modifications to the PSO algorithm, little work has analyzed the diversity of network topologies. Thus, a multipopulation particle swarm optimization algorithm with neighborhood learning (MPNLPSO) is proposed. The main contributions of this study can be summarized as follows:

1. The concept of population cooperation is introduced. In the iterative process, the particle swarm is divided into two populations according to the fitness value: an elite population and a shoddy population.
2. A small-world network neighborhood learning strategy is proposed. A neighborhood is constructed for each particle so that it learns from its neighborhood optimal particle, thus getting rid of the situation of only learning from the global optimal particle.
3. A dynamic opposition-based learning strategy is proposed. It mainly aims at reactivating the particles trapped in search stagnation in the shoddy population.

The rest of this article is arranged as follows: Section 2 briefly reviews the PSO algorithm and some details of the small-world network, opposition-based learning, and Lévy flight; in addition, the concept of diversity analysis of a particle swarm is introduced. The proposed algorithm is described in Section 3, which first details the two strategies, the small-world network neighborhood learning strategy and the dynamic opposition-based learning strategy, and then presents the framework of the proposed MPNLPSO algorithm. Section 4 presents the experimental results. Firstly, the benchmark functions used in the experiment and the parameter settings of the comparison algorithms are introduced. Secondly, the diversity of the VN small-world network and the NW small-world network is analyzed, and the solution accuracy of the proposed algorithm is verified. Thirdly, the effectiveness of the strategies is analyzed. Lastly, the proposed algorithm is compared from the perspective of convergence.

#### 2. Related Work

##### 2.1. Particle Swarm Optimization

In the standard particle swarm optimization algorithm, each particle represents a potential solution to the optimization problem and is defined by two vectors, velocity $v_i$ and position $x_i$. At initialization, particles are generated in a $D$-dimensional search space with random velocity and position values. During evolution, each particle updates its velocity and position according to the following learning strategies:

$$v_{ij}^{t+1} = v_{ij}^{t} + c_1 r_1 \left(Pbest_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(Gbest_{j}^{t} - x_{ij}^{t}\right), \tag{1}$$

$$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}, \tag{2}$$

where $v_{ij}^{t+1}$ is the velocity of the $j$th dimension of particle $i$ in generation $t+1$, and $x_{ij}^{t+1}$ represents the position of the $j$th dimension of particle $i$ in generation $t+1$. $c_1$ and $c_2$ are acceleration coefficients, and $r_1$ and $r_2$ are random numbers in $[0, 1]$. In order to control the influence of the particle's previous velocity on its current velocity, an inertia weight $\omega$ is usually added to the velocity term of Equation (1), as shown in Equation (3):

$$v_{ij}^{t+1} = \omega v_{ij}^{t} + c_1 r_1 \left(Pbest_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(Gbest_{j}^{t} - x_{ij}^{t}\right). \tag{3}$$
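The standard update rules described above can be sketched in Python (a minimal illustration, not the authors' implementation; the default values of `w`, `c1`, and `c2` are common choices assumed here):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update for the whole swarm:
    v <- w*v + c1*r1*(Pbest - x) + c2*r2*(Gbest - x), then x <- x + v."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # independent random numbers in [0, 1) per dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

Here `x`, `v`, and `pbest` are `(N, D)` arrays for a swarm of `N` particles in `D` dimensions, and `gbest` broadcasts over the swarm.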

##### 2.2. Small-World Network

Watts and Strogatz [25] rewired the conventional regular network and introduced disordered links. The resulting network has a high clustering coefficient and a low average path length, and this network model is generally called the small-world network model. Among the many forms of small-world networks, the WS small-world network and the NW small-world network are widely used, and both are constructed on the basis of a ring network. A WS small-world network is generated by fixing a vertex in a ring network and then randomly selecting a node from the remaining nodes to reconnect with it, while an NW small-world network is constructed by randomly adding edges between nodes with connection probability $p$; both processes must ensure that there are no duplicate edges in the network. The connection probability $p$ is the probability of reconnection of each edge. $p = 0$ means that no edge is reconnected, that is, the network is still a ring network. $p = 1$ means that every edge is reconnected, that is, the network becomes a composite of a random network and a ring network. $p \in (0, 1)$ indicates that some edges have been reconnected, that is, an NW small-world network.

The NW small-world network is introduced in detail here because of its application in the proposed algorithm. The left side of Figure 1 shows a ring topology, where each node is connected to the two nodes adjacent to it on its left and right sides, and the connection probability is 0. The middle of Figure 1 shows the small-world network when the connection probability of the edges is 0.2. When the connection probability of the edges is 1, the small-world network evolves into a combination of a regular network and a random network, shown on the right side of Figure 1.
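A construction of this kind can be sketched as follows (a hedged illustration: the ring degree `k` and probability `p` are example values, not parameters fixed by the paper):

```python
import itertools
import random

def nw_small_world(n, k=2, p=0.2, rng=None):
    """NW small-world graph: a ring lattice in which every node is linked
    to its k nearest neighbours on each side, plus random shortcut edges
    added with probability p; duplicate edges are never created."""
    rng = rng or random.Random()
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add(frozenset((i, (i + j) % n)))  # ring-lattice edge
    for i, j in itertools.combinations(range(n), 2):
        if frozenset((i, j)) not in edges and rng.random() < p:
            edges.add(frozenset((i, j)))  # random shortcut
    return edges
```

With `p = 0` the result is the pure ring lattice, and with `p = 1` every possible edge is present, matching the two extremes described above.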

In addition to building a small-world network on a regular ring as above, Kleinberg [26] constructed small-world networks in another way: rather than using a ring as the basic structure, his model begins from a two-dimensional grid and makes the edges directed.

Figure 2 shows a Von Neumann network with a swarm of size 5 × 5, in which each particle has the particles above, below, to the left, and to the right of it in its neighborhood, except for the edge particles. Figure 3 shows that each node in the Von Neumann network is connected with its neighborhood and, in addition, randomly connected with two other nodes; for instance, particle *i* is connected to two randomly selected particles *j* and *k*. Thus, a VN small-world network is generated.
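The VN small-world construction just described might be sketched like this (an illustrative sketch; the grid size and the number of extra links follow the description above):

```python
import random

def vn_small_world(rows, cols, extra=2, rng=None):
    """Von Neumann lattice (up/down/left/right neighbours, no wrap-around)
    plus `extra` random two-way shortcuts per node; a shortcut target may
    already be a neighbour, as in Figure 4."""
    rng = rng or random.Random()
    n = rows * cols
    nbrs = {i: set() for i in range(n)}
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    nbrs[i].add(rr * cols + cc)  # lattice neighbour
    for i in range(n):
        for j in rng.sample([j for j in range(n) if j != i], extra):
            nbrs[i].add(j)
            nbrs[j].add(i)  # shortcut edges are two-way
    return nbrs
```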

##### 2.3. Opposition-Based Learning

The opposition-based learning strategy [27] is a relatively new concept in the field of computational intelligence and has been proven effective in various optimization approaches [28–33]. When evaluating a solution to a given problem, simultaneously computing its opposite solution provides another chance of finding a candidate solution that is possibly closer to the global optimum.

Opposite number [29]: let $x \in [a, b]$ be a real number. The opposite of $x$ is defined as follows:

$$\tilde{x} = a + b - x. \tag{4}$$

Similarly, the definition is generalized to higher dimensions as follows:

Opposite point [29]: let $X = (x_1, x_2, \ldots, x_D)$ be a point in a $D$-dimensional space, where $x_i \in [a_i, b_i]$, $i = 1, 2, \ldots, D$. The opposite point $\tilde{X} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D)$ is defined as follows:

$$\tilde{x}_i = a_i + b_i - x_i. \tag{5}$$

By applying the definition of opposite point, the opposition-based optimization can be defined as follows:

Opposition-based optimization [29]: let $X = (x_1, x_2, \ldots, x_D)$ be a point in a $D$-dimensional space (i.e., a candidate solution). Assume $f(\cdot)$ is a fitness function used to evaluate the candidate's fitness. According to the definition of the opposite point, $\tilde{X}$ is the opposite of $X$. If $f(\tilde{X})$ is better than $f(X)$, then $X$ is replaced with $\tilde{X}$; otherwise the current point $X$ is kept. Hence, the current point and its opposite point are evaluated simultaneously in order to continue with the fitter one.

In particular, in the particle swarm optimization algorithm, Equation (6) is used to calculate the opposite position of a particle:

$$\tilde{x} = U + L - x, \tag{6}$$

where $U$ and $L$ represent the upper and lower bounds of the particles, that is, the domain of the objective function.
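Equation (6) and the keep-the-fitter selection rule can be written compactly (a minimal sketch; the helper names are ours, and minimization is assumed):

```python
import numpy as np

def opposite(x, lower, upper):
    """Opposite position per Equation (6): x_opp = U + L - x, elementwise."""
    return np.asarray(upper) + np.asarray(lower) - np.asarray(x)

def opposition_select(x, lower, upper, f):
    """Evaluate a point and its opposite simultaneously and keep the
    fitter one (minimization)."""
    xo = opposite(x, lower, upper)
    return x if f(x) <= f(xo) else xo
```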

##### 2.4. Lévy Flight

Lévy flight [34] is a class of non-Gaussian random processes whose random walks are drawn from a Lévy stable distribution. The generation of Lévy-flight random numbers consists of two steps: the choice of a random direction and the generation of steps that obey the Lévy distribution. Mantegna proposed a method for generating such random numbers from normal distributions in 1994, known as the Mantegna method. A random walk step $s$ obeying the Lévy distribution is generated by the Mantegna method as follows:

$$s = \frac{u}{|v|^{1/\beta}},$$

where $u$ and $v$ obey normal distributions:

$$u \sim N\left(0, \sigma_u^2\right), \quad v \sim N\left(0, \sigma_v^2\right),$$

$$\sigma_u = \left\{ \frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\left[(1+\beta)/2\right]\,\beta\,2^{(\beta-1)/2}} \right\}^{1/\beta}, \quad \sigma_v = 1.$$

The step size of the random walk is calculated by the following formula:

$$S = 0.01 \times s,$$

where 0.01 is the scale factor, which comes from $L/100$, the stride $L$ being the classical length scale. If the step $S$ were not scaled down in this way, Lévy flight might jump out of the search region because the step is too large.
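The Mantegna method above translates directly into code (a sketch assuming the common choice $\beta = 1.5$):

```python
import math
import numpy as np

def levy(shape, beta=1.5, rng=None):
    """Mantegna's method: s = u / |v|**(1/beta), with u ~ N(0, sigma_u^2)
    and v ~ N(0, 1), approximating a Lévy stable distribution."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def levy_step(shape, beta=1.5, rng=None):
    """Scaled step S = 0.01 * s, keeping the walk inside the search region."""
    return 0.01 * levy(shape, beta, rng)
```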

##### 2.5. Diversity Analysis of Particle Swarm

Judging the difficulty of a problem from the fitness-distance correlation, it can be concluded that when the problem presents simple characteristics, particle swarm optimization needs stronger local exploitation ability, and when the problem presents difficult characteristics, it needs stronger global exploration ability. Therefore, it is necessary to judge the impact of the two complex networks on the particle swarm optimization process. Taking advantage of the characteristics of complex network topologies can effectively enhance the robustness of the particle swarm optimization algorithm.

There are many ways to evaluate population diversity, such as the population diameter, the population radius, the average distance from the population center to the surrounding particles, the standardized average distance from the population center to the surrounding particles, and population consistency. During the search, the particle swarm exhibits different characteristics. In order to intuitively reflect the diversity of the scale-free network and the small-world network during the search, this study selects the average distance from the population center to the surrounding particles to describe the diversity of the particle swarm; that is, *Gbest* is taken as the population center and the average distance from the other particles to *Gbest* is calculated:

$$Div = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\sum_{j=1}^{D} \left(x_{ij} - Gbest_{j}\right)^{2}},$$

where *N* is the particle swarm size and *D* is the dimension.
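This diversity measure is a one-liner in practice (a sketch; the function name is ours):

```python
import numpy as np

def swarm_diversity(X, gbest):
    """Average Euclidean distance from every particle to Gbest:
    Div = (1/N) * sum_i sqrt(sum_j (x_ij - Gbest_j)**2)."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X - np.asarray(gbest, dtype=float), axis=1)
    return float(d.mean())
```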

This measure is used to evaluate the diversity of the particle swarm after every iteration. More specifically, taking *Gbest* as the central position of the particle swarm is more objective once the different networks have taken effect, since it fully reflects the impact of each network on the swarm. If the result is small, the network presents good convergence in the particle swarm optimization algorithm; otherwise, the network maintains good diversity.

#### 3. Proposed Algorithm

This section introduces the proposed MPNLPSO algorithm in detail. Firstly, Section 3.1 introduces the small-world network neighborhood learning strategy. By building a small-world network to construct a neighborhood for each particle, the algorithm can make full use of the experience gained by the optimal particle within each neighborhood, thus getting rid of the situation of learning only from the global optimal particle. The dynamic opposition-based learning strategy is introduced in Section 3.2; it aims at reactivating shoddy-population particles that have fallen into search stagnation.

##### 3.1. Small-World Network Neighborhood Learning Strategy

Heuristic optimization algorithms based on multiple populations have long attracted researchers' interest. In order to improve the search efficiency of particle swarm optimization, the concept of population cooperation is introduced. The swarm is divided into an elite population and a shoddy population, and different learning strategies are adopted according to the characteristics of the two populations. The partition rule is as follows: in each iteration, the particle swarm is sorted according to fitness value. Because all the problems in this study are minimization problems, the fitness values are sorted in ascending order. The top 50% of the particles form the elite population, and the remaining 50% form the shoddy population.
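The partition rule amounts to a sort and a split (a sketch; the function name is ours):

```python
import numpy as np

def split_populations(fitness_values):
    """Return the indices of the elite (best half) and shoddy (worst half)
    particles for a minimization problem."""
    order = np.argsort(fitness_values)  # ascending: best fitness first
    half = len(order) // 2
    return order[:half], order[half:]
```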

In this study, the diversity analysis method introduced in Section 2.5 is used to calculate the diversity of the NW small-world network and the VN small-world network. It is concluded that the diversity of the VN small-world network is greater than that of the NW small-world network. The specific experimental results are presented in Section 4.

For the elite population, the local search ability should be enhanced because its locations are more promising. Therefore, this study uses the NW small-world network to build a neighborhood for each particle belonging to the elite population. The specific steps are as follows:

1. For each particle, build a ring-shaped regular network that connects it to the particles whose indices lie within 10% of the population size before and after it.
2. Set the connection probability to 0.5 and randomly add edges to the regular network.
3. After every particle in the population has been traversed, the neighborhood topology of the small-world network is formed.

For the shoddy population, the global exploration ability should be enhanced because its locations are farther from the actual optimum than those of the elite population. Therefore, this study uses the VN small-world network to build a neighborhood for each particle. The specific steps are as follows:

1. The particles in the shoddy population are arranged into a 7 × 7 Von Neumann network according to their sequence numbers.
2. After the Von Neumann network is constructed, each particle is randomly connected to two additional particles besides the particles above, below, to its left, and to its right.

In order to ensure that the final network can present the characteristics of small-world network, the edges connected between particles are set as two-way edges, and the particles do not consider whether they have been previously connected when randomly selecting the target particles.

Figure 4 shows a simple example. Particle *i* is connected with *j* and *k*. The randomly selected target particle *j* has been connected with *i*, but it is still one of the two particles randomly selected by *i*. Only in this way can the final network show the characteristics of small-world network.

After constructing the small-world network, each particle learns from its neighborhood optimal particle *Nbest*, so as to get rid of the situation in which a particle learns only from the global optimal particle. The velocity update formula of the small-world network neighborhood learning strategy is as follows:

$$v_{ij}^{t+1} = \omega_i v_{ij}^{t} + c_1 r_1 \left(Pbest_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(Nbest_{ij}^{t} - x_{ij}^{t}\right),$$

where $Nbest_{ij}^{t}$ is the position of the best particle in the neighborhood of particle $i$ in generation $t$.

Even if such a learning strategy is adopted, particle swarm optimization may still fall into local optima due to the uncertainty of the search process. The historical optimal and neighborhood optimal particles are especially important because other particles learn from them. This study takes advantage of the characteristics of Lévy flight to randomly perturb *Pbest* and *Nbest*, reducing the risk of falling into a local optimum. The improved velocity update equation is as follows:

$$v_{ij}^{t+1} = \omega_i v_{ij}^{t} + c_1 r_1 \left(\widetilde{Pbest}_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(\widetilde{Nbest}_{ij}^{t} - x_{ij}^{t}\right), \tag{12}$$

where $\widetilde{Pbest}$ and $\widetilde{Nbest}$ denote the historical optimal and neighborhood optimal positions after random perturbation by the Lévy step $S$.
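A sketch of this perturbed update follows; note that the additive form of the perturbation and the `c1`/`c2` values are our assumptions, not the paper's exact choices:

```python
import math
import numpy as np

def levy_sample(shape, beta=1.5, rng=None):
    """Mantegna-style Lévy step (see Section 2.4)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0.0, sigma_u, shape) / np.abs(rng.normal(0.0, 1.0, shape)) ** (1 / beta)

def neighbourhood_velocity(x, v, pbest, nbest, w, c1=2.0, c2=2.0, rng=None):
    """Velocity update that learns from Lévy-perturbed Pbest and Nbest
    instead of the global best particle."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    pb = pbest + 0.01 * levy_sample(x.shape, rng=rng)  # perturbed Pbest (assumed form)
    nb = nbest + 0.01 * levy_sample(x.shape, rng=rng)  # perturbed Nbest (assumed form)
    return w * v + c1 * r1 * (pb - x) + c2 * r2 * (nb - x)
```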

##### 3.2. Dynamic Opposition-Based Learning Strategy

Based on the characteristics of the shoddy population, we know that it has a higher probability of falling into search stagnation than the elite population. Therefore, a dynamic opposition-based learning strategy is proposed. The basic idea is to set a parameter *stag* with an initial value of 0. When the current solution is not significantly improved compared with the previous one, *stag* is increased by 1. When the solution found by the population is better than the previous solution, *stag* is reset to 0. As the search continues, the value of *stag* is kept or increased. In this study, the upper bound of *stag* is set to 5. When the upper bound is reached, the opposition-based learning strategy is applied to the particle. After a solution is generated by opposition-based learning, it need not be compared with the original solution, for the obvious reason that the original solution has already left the particle trapped in search stagnation; that is, the purpose of dynamic opposition-based learning is to increase the diversity of the particle and reactivate its search, not merely to find a better solution.

The pseudocode of the dynamic opposition-based learning strategy is expressed as follows:

(1) Input: shoddy population *N*_{2}, temporary variable *temp* storing the last global optimal particle, current global optimal particle *Gbest*
(2) Output: *x*_{2}
(3) *stag* = 0
(4) For *i* = 1 : *N*_{2}
(5)  If *stag* > 5
(6)   *x*_{2} = *x*_{2} + *v*_{2}
(7)   *x*_{2} = *U* + *L* − *x*_{2}
(8)  Else
(9)   *x*_{2} = *x*_{2} + *v*_{2}
(10)  End if
(11) End for
(12) Evaluate particles and update *Gbest*
(13) If fitness(*Gbest*) < fitness(*temp*)
(14)  fitness(*temp*) = fitness(*Gbest*)
(15)  *stag* = 0
(16) Else
(17)  *stag* = *stag* + 1
(18) End if
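The core of the strategy can be sketched in Python as follows (function names are ours; the threshold 5 follows the text):

```python
import numpy as np

def dynamic_obl_update(x2, v2, lower, upper, stag, stag_max=5):
    """Position update for a shoddy-population particle: once the
    stagnation counter exceeds stag_max, the particle jumps to its
    opposite position U + L - x, kept unconditionally since the goal is
    to reactivate the particle rather than to find a better solution;
    otherwise the usual move x + v is applied."""
    if stag > stag_max:
        return np.asarray(upper) + np.asarray(lower) - np.asarray(x2)
    return np.asarray(x2) + np.asarray(v2)

def update_stag(stag, f_gbest, f_prev):
    """Reset the counter when the global best improves, else increment it."""
    return 0 if f_gbest < f_prev else stag + 1
```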

##### 3.3. The Framework of the MPNLPSO Algorithm

According to the above learning strategies, a multipopulation particle swarm optimization algorithm with neighborhood learning is proposed. After initialization and evaluation, the algorithm enters the iterative process. The particle swarm is sorted according to fitness value; the top 50% of the particles are taken as the elite population, and the remaining 50% as the shoddy population. For the elite population and the shoddy population, an NW small-world network and a VN small-world network are constructed, respectively, so that each particle can learn from the optimal particle in its neighborhood. For the shoddy population, the dynamic opposition-based learning strategy is additionally used to reactivate particles trapped in search stagnation. Finally, the elite and shoddy populations are merged to reevaluate the particles and update the historical optimal particles *Pbest* and the global optimal particle *Gbest*.

Inspired by Li et al. [17], this study constructs a neighborhood for each particle and, in addition, divides the particle swarm into an elite population and a shoddy population. As the search proceeds, it is more reasonable to give each particle its own inertia weight than to decrease the inertia weight linearly: when the updated position of a particle is better than the average position of the swarm, the influence of the particle's previous velocity should be reduced, which enhances its local exploitation ability; when the updated position is worse than the average position, the influence of the previous velocity should be increased to enhance its global exploration ability. Therefore, the inertia weight equation proposed by Li et al. [17] is used in this study:

$$\omega_i^t = \omega_{\min} + \left(\omega_{\max} - \omega_{\min}\right) \cdot \frac{f\left(Pbest_i^t\right) - f_{\min}^t}{f_{avg}^t - f_{\min}^t}, \tag{13}$$

where $\omega_{\max} = 1.2$, $\omega_{\min} = 0.2$, $f(Pbest_i^t)$ is the historical optimal fitness of the $i$th particle in generation $t$, $f_{\min}^t$ is the minimum fitness value in generation $t$, and $f_{avg}^t$ is the average fitness value in generation $t$.
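A hedged sketch of this per-particle weight (our reconstruction of the rule from [17]; the clamp at $\omega_{\max}$ is an assumption for particles worse than the swarm average):

```python
def adaptive_inertia(f_pbest_i, f_min, f_avg, w_max=1.2, w_min=0.2):
    """Per-particle inertia weight: a particle whose personal-best fitness
    is close to the swarm minimum gets a small weight (exploitation), one
    far from it gets a larger weight (exploration)."""
    if f_avg == f_min:
        return w_min  # degenerate swarm: all fitness values equal
    ratio = (f_pbest_i - f_min) / (f_avg - f_min)
    return min(w_max, w_min + (w_max - w_min) * ratio)
```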

The pseudocode of the MPNLPSO algorithm can be expressed as follows:

(1) Initialization: *N* = 100, *D* = 30 or 50
(2) Evaluate the particle swarm and get *Pbest* and *Gbest*
(3) While *FEs* < *maxFEs* do
(4)  Calculate the inertia weight of each particle using Equation (13)
(5)  Generate a set of random numbers obeying the Lévy distribution
(6)  Sort the particle swarm by fitness in ascending order
(7)  Construct the elite population *N*_{1} using the top 50% of particles
(8)  Construct the NW small-world network
(9)  For *i* = 1 : *N*_{1}
(10)  Select *Nbest* for every particle
(11)  Update velocity using Equation (12)
(12)  Update position using Equation (2)
(13) End for
(14) Construct the shoddy population *N*_{2} using the last 50% of particles
(15) Construct the VN small-world network
(16) For *i* = 1 : *N*_{2}
(17)  Select *Nbest* for every particle
(18)  Update velocity using Equation (12)
(19)  Update position using the dynamic opposition-based learning strategy
(20) End for
(21) End while

#### 4. Experiment

This study is performed on a computer with 64-bit Windows, an Intel(R) Core(TM) i5-6300HQ CPU @ 2.30 GHz, and 8 GB RAM. MATLAB R2019a was used to carry out the relevant experimental work, and the results were discussed and analyzed. Section 4.1 introduces the benchmark functions used in the experiment and the parameter settings of the comparison algorithms. The diversity of the VN small-world network and the NW small-world network is analyzed in detail in Section 4.2. Section 4.3 analyzes the effectiveness of the strategies. In Section 4.4, the solution accuracy of the proposed algorithm is verified in 30 and 50 dimensions, respectively. In Section 4.5, the proposed algorithm is compared from the perspective of convergence.

##### 4.1. Benchmark Functions and Parameter Settings

In order to verify the robustness of the MPNLPSO algorithm in solving complex problems, twenty widely used benchmark functions (ten unimodal functions and ten multimodal functions) are solved in this study. In addition, this study compares the MPNLPSO algorithm with seven other well-known particle swarm optimization variants: BLPSO [35], CLPSO [36], SLPSO [37], ACPSO [37], SFAPSO [17], SWPSO-I [38], and RTPSO [39]. Among them, BLPSO, CLPSO, SLPSO, and ACPSO are four PSO variants based on learning strategies, while SFAPSO, SWPSO-I, and RTPSO are three PSO variants based on network topology. Table 1 lists the names and the upper and lower bounds of the benchmark functions. Table 2 shows the parameter settings of all algorithms. To ensure fairness, the population size of all algorithms is set to 100, the dimensions are 30 and 50, respectively, and the maximum number of fitness evaluations is 10000 × *D*. Other parameters are set according to their original values.

##### 4.2. Diversity Analysis of Small-World Networks

This section uses the method described in Section 2.5 to calculate the diversity of the VN small-world network and the NW small-world network. Both small-world networks are applied to the twenty benchmark functions, with a population size of 100, a dimension of 30, and 2000 iterations. Each benchmark function is solved 30 times, the average distance from the center of the population to the surrounding particles is calculated, and the averages over the 30 runs are listed in Table 3.
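The diversity measure described above, the average distance from the population center to the particles, can be sketched as follows. This assumes Euclidean distance; the exact formula in the paper's Section 2.5 may normalize differently.

```python
import numpy as np

def swarm_diversity(positions):
    """Average Euclidean distance from each particle to the swarm centroid.

    positions: (N, D) array of particle positions. A larger value means the
    swarm is more spread out (higher diversity); a value near zero means the
    population has converged to a single point.
    """
    center = positions.mean(axis=0)
    return float(np.linalg.norm(positions - center, axis=1).mean())
```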

As can be seen from Table 3, in terms of the unimodal test functions, the NW small-world network has greater diversity than the VN small-world network on the *f*4 and *f*5 functions and shows better convergence ability on the other eight test functions. As for the multimodal functions, the NW small-world network shows good diversity on the *f*11, *f*12, *f*18, and *f*19 functions, and the VN small-world network shows good diversity on the other six multimodal functions. On the whole, the VN small-world network outperforms the NW small-world network on fourteen functions; that is, the VN small-world network shows better diversity in particle swarm optimization, while the convergence ability of the NW small-world network is slightly better than that of the VN small-world network.

##### 4.3. Effectiveness of the Strategy

This part tests the effectiveness of the strategies using five combinations. Comb. 1 is the standard particle swarm optimization algorithm, Comb. 2 adds the multipopulation strategy, Comb. 3 uses the NW small-world network to build the neighborhood topology, Comb. 4 uses the VN small-world network to build the neighborhood topology, and Comb. 5 is the algorithm proposed in this study. All combinations were tested in 30 dimensions, and each combination was run 30 times independently to obtain the mean and standard deviation. It can be clearly seen from Table 4 that the experimental results do not improve when Comb. 2 does not use a network topology to build the neighborhood. Comb. 3 and Comb. 4 significantly improve the result of the *f*5 function without using the multipopulation strategy and can reach the actual optimal value, but not all functions improve. The convergence accuracy of Comb. 4 on the *f*13 function is better than that of the other combinations. Comb. 5, that is, the algorithm proposed in this study, shows excellent optimization ability on most functions, which indicates that the learning strategies cooperate effectively and significantly improve the convergence accuracy of the algorithm.

##### 4.4. Comparison of PSO Algorithm Variants

This section compares MPNLPSO with the seven well-known PSO variants in 30 and 50 dimensions, respectively. The population size is 100, and the maximum number of evaluations is 10000 × *D*. To characterize the calculation error of each algorithm, each function is solved 30 times, and the minimum, average, and standard deviation are recorded. Tables 5 and 6 list the comparison results of the seven PSO variants on the benchmark functions; each function occupies three rows of values, namely the minimum value, the average value, and the standard deviation.

It can be seen from the results of the unimodal functions in Table 5 that SFAPSO can converge to the actual optimal value on the *f*1 function. Across its 30 runs, MPNLPSO can also converge to the actual optimal value on *f*1, but it is not stable enough. On the *f*2 function, all algorithms fall into the local optimum. On the *f*3 function, BLPSO, SLPSO, and MPNLPSO can all converge to the actual optimal value, but MPNLPSO is more stable than the other two algorithms. On the *f*4 function, BLPSO, SLPSO, and SFAPSO all present excellent optimization capability; in particular, SFAPSO converges to the actual optimal value and is relatively stable, while BLPSO and SLPSO show high accuracy, almost reaching the actual optimal value. By contrast, on the *f*4 function, the performance of MPNLPSO is worse than that of BLPSO, SLPSO, SWPSO-I, and SFAPSO. On the functions *f*5–*f*10, MPNLPSO shows excellent optimization ability, converging to the actual optimum in both minimum and average value with good robustness. Among the *f*5–*f*10 functions, SLPSO can converge to the actual optimal value on only two functions. SWPSO-I and SFAPSO can converge to the actual optimal value on the *f*5 and *f*10 functions, but the stability of SWPSO-I on the *f*5 function is not as good as that of SFAPSO and MPNLPSO. Similar to SWPSO-I, ACPSO and RTPSO can converge to the actual optimal value on the *f*5 function, but their stability is not good enough.

It can be seen from the results of the multimodal functions in Table 5 that on the *f*11 function, the optimization effect of all algorithms is poor, and MPNLPSO is only relatively better than the other algorithms. On the *f*13 function, SFAPSO can converge to the actual optimal value, but the average values show that its stability is ordinary. The average values of ACPSO on the *f*13 function are slightly better than those of the other comparison algorithms, indicating that it has a certain advantage on this function. On the *f*14 function, except for MPNLPSO, which exhibits excellent optimization capability, all other algorithms fall into local optima. Regarding the *f*15 function, MPNLPSO is better than the other comparison algorithms in both minimum and average value; although it fails to find the actual optimal value, it is competitive compared with the other algorithms. From the results of functions *f*16–*f*20, MPNLPSO can converge to the actual optimal value and shows quite competitive stability. In particular, on the *f*20 function, while the other seven algorithms all fall into the local optimum, MPNLPSO can effectively jump out of the local optimum and find the best solution.

From the results of the unimodal functions in Table 6, it can be seen that MPNLPSO cannot converge to the actual optimal value on the *f*2 and *f*4 functions but succeeds on the remaining eight functions. However, the average value on the *f*1 function indicates that MPNLPSO's optimizing ability there is not stable enough. On the *f*2 function, all algorithms fall into local optima. On the *f*3 function, BLPSO and SLPSO present the same excellent optimization capability as MPNLPSO, but SLPSO is relatively less stable than the other two. In particular, on the *f*4 function, SLPSO, SWPSO-I, and SFAPSO can all converge to the actual optimal value, and their optimization ability is stronger than that of MPNLPSO. ACPSO, RTPSO, SWPSO-I, and SFAPSO all have excellent optimizing capability on the *f*5 function, but the stability of SWPSO-I is slightly worse. SLPSO can reach the actual optimal value on the *f*9 and *f*10 functions. SWPSO-I, RTPSO, and SFAPSO also show a certain competitiveness on the *f*10 function.

From the results of the multimodal functions in Table 6, it can be seen that the optimization capabilities of all algorithms are ordinary on the *f*11 function, while MPNLPSO can reach the actual optimal value on the *f*12 function with a certain degree of stability. On the *f*13 function, BLPSO's optimization ability is slightly stronger than that of the other comparison algorithms, but it still cannot converge to the actual optimal value. Notably, MPNLPSO presents better robustness on *f*14 and *f*16–*f*20, and both the accuracy and the stability of its solutions are stronger than those of the other algorithms. By contrast, the other comparison algorithms do not converge to the actual optimal value on the *f*16–*f*20 functions. Although SLPSO does not find the actual optimal value on the *f*18 function, its accuracy is high and it has a certain degree of stability. The other seven algorithms still fail to jump out of the local optimum on the *f*20 function, while MPNLPSO can easily jump out of the local optimum and reach the actual optimal value.

Combining Tables 5 and 6, we can conclude that MPNLPSO is competitive in solving both low-dimensional and high-dimensional complex optimization functions. The other comparison algorithms also have certain optimization capabilities, but in terms of both overall optimization effect and stability, MPNLPSO is stronger.

In addition, this study uses two nonparametric tests with a significance level of 0.05, namely the Wilcoxon rank-sum test and the Friedman test. For the Wilcoxon rank-sum test, "+", "−", and "≈" in Tables 7 and 8 indicate that the solution accuracy and stability of MPNLPSO are better than, worse than, and almost equal to those of the comparison algorithms, respectively. Tables 9 and 10 show the rankings obtained by the Friedman test.
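The Friedman rankings reported in Tables 9 and 10 rank the algorithms on each benchmark function and then average the ranks. A minimal sketch of that averaging step is shown below on hypothetical data; note that it breaks ties by index, whereas the full Friedman test assigns tied entries their average rank.

```python
def average_ranks(errors):
    """Mean Friedman-style rank of each algorithm over a set of functions.

    errors[f][a] is the error of algorithm a on benchmark function f
    (lower is better; rank 1 is best). Ties are broken by index here;
    the full Friedman test would average the ranks of tied entries.
    """
    n_func, n_alg = len(errors), len(errors[0])
    totals = [0.0] * n_alg
    for row in errors:
        # rank algorithms on this function from smallest to largest error
        for rank, alg in enumerate(sorted(range(n_alg), key=row.__getitem__), 1):
            totals[alg] += rank
    return [t / n_func for t in totals]
```

The algorithm with the smallest mean rank (1.65 for MPNLPSO in Tables 8 and 10) is the overall winner.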

As can be seen from Table 7, MPNLPSO has higher solution accuracy than BLPSO on fifteen functions and better performance than CLPSO on nineteen functions. The SLPSO algorithm performs well: on four functions it is equivalent to MPNLPSO and on three functions it is better, but on thirteen functions its solution accuracy is inferior to that of MPNLPSO. SFAPSO matches MPNLPSO's solving ability on three functions and outperforms it on two functions. In Tables 8 and 10, the Friedman test result of MPNLPSO is 1.65. In Table 8, SLPSO ranks second with 2.75, and BLPSO ranks third with 2.85. In Table 10, both BLPSO and SLPSO rank second. To sum up, the experimental analysis and comparison show that the proposed algorithm is strongly competitive in both solution accuracy and convergence speed.

##### 4.5. Convergence Analysis of the Comparison Algorithms

This section analyzes the convergence of MPNLPSO and the seven other comparison algorithms. Figures 5 and 6 show the convergence curves of the 30-dimensional unimodal and multimodal functions, and Figures 7 and 8 show the convergence curves of the 50-dimensional unimodal and multimodal functions. It can be seen from Figure 5 that on the *f*1 function, MPNLPSO can quickly escape after first falling into a local optimum, but it becomes trapped again in the later stage, and its final optimization effect is inferior to that of SLPSO. The *f*4 function can be solved quickly to a certain accuracy in the early stage, but SLPSO achieves better accuracy in the later stage. On the remaining unimodal functions, MPNLPSO can quickly converge to the actual optimal value; precisely because MPNLPSO generally finds the actual optimal value within the first 10 iterations, the continuation of the convergence curve may not be clearly visible in the figure. Similarly, it can be seen from the multimodal functions in Figure 6 that MPNLPSO has a very poor convergence effect on the *f*13 function but shows good convergence ability on the other multimodal functions. This shows that each complex optimization problem has different characteristics, and no single algorithm can achieve very high solution accuracy and convergence speed on all complex functions. As the saying goes, there is no free lunch, and the algorithm proposed in this study cannot solve all complex optimization functions perfectly. As can be seen in Figures 7 and 8, the convergence ability of MPNLPSO does not degrade in 50 dimensions, where it also shows high solution accuracy and convergence speed.

The good robustness of MPNLPSO can be attributed to the following aspects. First, the algorithm divides the particles into two populations according to their fitness values for the purpose of cooperation, and a dynamic opposition-based learning strategy is proposed to activate particles trapped in search stagnation in the shoddy population. Second, according to the different characteristics of the particles, corresponding learning strategies are adopted to enhance the abilities of different particles, which contributes to the search process. Third, the small-world network topology is used to build a neighborhood for the particles; by learning from the neighborhood optimal particles instead of only the global optimal particle, the risk of falling into local optima can be reduced to a certain extent. Finally, since the historical optimal positions and the neighborhood optimal positions of the particles may themselves fall into local optima, this study takes advantage of Lévy flight to randomly perturb them, increasing their diversity so that the particle swarm maintains good diversity from the beginning to the end of the search. To sum up, MPNLPSO takes corresponding measures to strengthen both global exploration and local exploitation, effectively improving the solution accuracy of particle swarm optimization.
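Two of the building blocks summarized above can be sketched briefly: a Lévy-flight step generated by Mantegna's algorithm (the stability index `beta = 1.5` is a common choice, assumed here rather than taken from the paper), which is the kind of heavy-tailed perturbation applied to the *Pbest* and *Nbest* positions, and the opposite point used by opposition-based learning (the paper's dynamic variant adds triggering conditions not shown).

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One step length from a symmetric Lévy-stable distribution
    (Mantegna's algorithm): occasional large jumps among many small ones."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def opposite_point(x, lb, ub):
    """Opposite solution used by opposition-based learning: x' = lb + ub - x,
    evaluated component-wise within the search bounds [lb, ub]."""
    return [lb_j + ub_j - x_j for x_j, lb_j, ub_j in zip(x, lb, ub)]
```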

#### 5. Conclusion

In this study, a multipopulation particle swarm optimization algorithm with neighborhood learning is proposed to improve solution accuracy and convergence speed. The standard particle swarm optimization algorithm easily falls into local optima because all particles learn only from the global optimal particle. To get rid of this situation, this study proposes a small-world network neighborhood learning strategy, which makes the particles learn from the neighborhood optimal particles by constructing a neighborhood for each particle. At the same time, in view of the quality of the particles' positions during the search process, this study adopts different small-world network neighborhood topologies for different populations for the purpose of cooperation. For the shoddy population, a dynamic opposition-based learning strategy is proposed to activate particles trapped in search stagnation. In addition, to maintain population diversity throughout the search, Lévy flight is used to randomly perturb the historical optimal and neighborhood optimal particles. Twenty benchmark functions are tested, including ten unimodal and ten multimodal test functions. The MPNLPSO algorithm can find the actual optimal value on most test functions with good stability. Moreover, through the rank-sum test and the Friedman test, MPNLPSO ranks first overall, and its convergence ability is confirmed by the convergence curves. Experimental results show that MPNLPSO effectively improves the solution accuracy and convergence speed of the algorithm with a certain robustness.

Although the algorithm proposed in this study achieves a certain improvement, it still faces challenges in solving more complex problems. In future work, we will focus on analyzing the characteristics of other network topologies and effectively utilizing the unique characteristics of different topologies, so that the algorithm can effectively solve more complex optimization problems.

#### Data Availability

All data can be obtained by contacting the corresponding author’s e-mail.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.