Mathematical Problems in Engineering


Research Article | Open Access

Volume 2019 |Article ID 8164083 | 22 pages | https://doi.org/10.1155/2019/8164083

Improved Adaptive Holonic Particle Swarm Optimization

Academic Editor: Oliver Schütze
Received: 02 Jul 2019
Revised: 19 Oct 2019
Accepted: 20 Nov 2019
Published: 12 Dec 2019

Abstract

The Holonic Particle Swarm Optimization (HPSO) algorithm was the first to apply multiagent theory to the improvement of the PSO algorithm, and it achieved good results. To further improve the performance of the algorithm, this paper proposes an improved Adaptive Holonic Particle Swarm Optimization (AHPSO) algorithm. First, the HPSO algorithm is briefly reviewed; it can be further improved in three aspects: the grouping strategy, the setting of the number of iterations, and the discrimination of state switching. HPSO uses an approximately uniform grouping strategy, which is the simplest but ignores the connections between particles; search efficiency improves if particles with larger or smaller differences are grouped together at different search stages. This paper therefore proposes a grouping strategy based on information entropy and hierarchical clustering and combines the two grouping strategies with corresponding search methods. The performance of HPSO also depends on the preset number of iterations: if it is too small, the optimum is hard to reach; if it is too large, many computing resources are wasted. This paper therefore constructs an adaptive termination condition that causes the particles to terminate spontaneously after convergence. Finally, HPSO performs only a single conversion from extensive search to exact search and can still fall into local optima, so this paper proposes a state-switching condition that improves the probability of the algorithm jumping out of local optima. AHPSO and HPSO are compared on 22 groups of standard test functions. AHPSO converges faster than HPSO; when HPSO is run for the number of iterations at which AHPSO converges, a large gap remains between HPSO's result and the optimal solution. That is, AHPSO achieves better algorithm efficiency without the number of iterations having to be preset.

1. Introduction

Particle swarm optimization (PSO) has been successfully applied to many optimization problems. Owing to their particular superiority in tackling nonconvex, nondifferentiable, and even ill-defined problems, evolutionary algorithms have attracted increasing interest and developed well in both academia and industry over the past several decades [1–7]. The PSO algorithm is conducive to machine optimization [8, 9], energy management [10], the accurate prediction and optimization of cogging torque [11], and other engineering applications. It also helps to solve minimum set covering problems [12], the obnoxious p-median problem [13], and other academic problems. Yet, PSO is prone to premature convergence and local optima. Scholars have conducted in-depth research on this problem and achieved fruitful results.

The major research directions fall into three types. The first is to dynamically adjust the parameters of the PSO search so as to change the search strategy in line with the search stage [14]. Since the parameters of the PSO algorithm affect its performance, Hu et al. [15] studied an automatic control mechanism that ensures better parameters are obtained at each iteration. Yet, such a method can be applied only to differentiable functions, which limits its scope. Bai et al. [16] focused on reducing the feasible search space of the PSO algorithm so as to accelerate the solution of multiobjective optimization problems. This method is conducive to solving high-dimensional complex problems, but it requires further analysis and modeling: it can be used only after the constraints are transformed, and its degree of improvement over the PSO algorithm is limited.

Pham et al. [17] built the low-discrepancy-sequence-initialized particle swarm optimization with high-order nonlinear time-varying inertia weight (LHNPSO) algorithm to achieve the segmentation of magnetic resonance images. Yan et al. [18] introduced a request-and-reset strategy that resets low-fitness particles so that they form the best group around the optimal particles, improving search results. Yet, such an algorithm increases the probability of falling into local optima during optimization. Jordehi [19] constructed a time-varying weight coefficient and an adaptively adjusted particle search strategy, which provides a feasible plan for estimating the circuit-model parameters of PV cells/modules. Chen et al. [20] employed the fitness function to dynamically and adaptively adjust particle weights, effectively improving the search mode of particles at different times. However, the order of magnitude and variation of the fitness function may vary across problems, so the method needs further improvement.

The second is to combine the PSO algorithm with other heuristic algorithms so that they complement each other. Ghorbani et al. [21] built a GA-PSO algorithm by incorporating PSO and the Genetic Algorithm (GA) to optimize the sizing of an off-grid house with photovoltaic panels, wind turbines, and a battery, and favorable results were obtained. This method splits the population into two groups that perform the PSO and GA operations, respectively. In each iteration, it improves the best position by comparing the optimal results of PSO and GA. In fact, each part of the algorithm runs independently, and the result is always that of PSO corrected by GA. If GA consistently performs poorly, it contributes nothing while consuming considerable computing resources, and the optimization virtually reduces to PSO alone.

Goudarzi et al. [22] and Ran and Mesut [23] incorporated the artificial bee colony (ABC) algorithm into PSO, thereby improving the performance of the PSO algorithm. Gupta and Saini [24] and Liu et al. [25] used fuzzy theory to improve the PSO algorithm, which effectively solved their target problems. Such methods require setting a fuzzy function, and the appropriate fuzzy function may vary with the optimization problem, which affects algorithm performance. Jiang et al. [26] employed wavelet mutation to reduce the probability of PSO falling into local optima, but different wavelet bases must be selected for different problems, which may lead to performance differences; this algorithm can be further improved. Chen et al. [27] initialized the population using opposition-based learning (OBL) to increase population diversity and reduced the possibility of PSO particles falling into local optima using the ergodicity, nonrepetition, and irregularity of the sine cosine algorithm (SCA).

The third is to improve search performance by changing PSO's particle topology. Issa et al. [28] conducted the bottom-level search following the concept of hierarchy and the favorable exploitation of prominent regions by sine cosine optimization (SCA) so as to improve search accuracy, and conducted the top-level search using PSO's excellent exploration of the search space to increase population diversity. Through these efforts, the Adaptive SCA-PSO algorithm was established, balancing exploration and exploitation and gaining favorable results. Because of the randomness of the initial population, the grouping strategy inevitably affects the efficiency of the algorithm; this problem is not the focus of [28], and the updating method of the SCA parameters receives little discussion there, so that updating method leaves room for further study.

Based on orthogonal experimental design, Zhan et al. [29–31] increased particle diversity and search rate. Yet, this method holds only when there is no or weak interaction among the factors, and orthogonality may be difficult to achieve in complex problems. In this regard, Al-Bahrani and Patra [32] addressed these problems by building a novel orthogonal PSO algorithm based on orthogonal diagonalization, and 30 standard functions were tested to verify its performance. However, the discussion and proofs in [32] require that all vectors in the d-dimensional search space be linearly independent, which limits the applicable scope of the algorithm. P. Liu and J. Liu [33] and Tan et al. [34] employed multileader methods to improve the topological structure of the particles, thereby improving algorithm efficiency, and this work effectively guides subsequent research.

Scholars have conducted in-depth research on this problem from different perspectives. Among the methods that change the topology, literature [1] proposed Holonic Particle Swarm Optimization (HPSO). Multiagent systems have achieved widespread success in many fields but had never before been applied to improving the PSO algorithm: PSO particles cannot learn and evolve autonomously, so they are not agents, and raising the intelligence level of PSO particles would sacrifice the speed of the PSO algorithm. Literature [1] improved the PSO algorithm by a holonic approach and improved the topology of the particles, applying multiagent theory to the improvement of the PSO algorithm for the first time. Through the designed hierarchy, the global search ability of the particles is improved by the interaction between levels and the information held at different levels. The constraints on the problem are reduced, and there is no explicit restriction on the objective function, which broadens the applicability of the algorithm, and the performance of the algorithm is greatly improved.

The method of literature [1] has three points that can be improved, namely, the grouping mode, the search-state switching condition, and the setting of the number of iterations.
(1) Literature [1] uses a random grouping method. This paper proposes a grouping strategy based on information entropy and hierarchical clustering and combines the two grouping strategies with corresponding search methods, improving search performance.
(2) The search method of literature [1] converts directly to the precise search after a fixed number of range searches. Based on the gradient of the fitness function, this paper constructs adaptive switching and reverse-switching strategies to achieve adaptive mutual conversion between range search and precise search, which is more reasonable.
(3) The algorithm of [1] requires presetting the number of iterations, so the algorithm may well keep running after it has already converged. This paper constructs an adaptive termination condition that causes the particles to terminate spontaneously after convergence, saving computational resources and time.

The structure of this study is organized as follows: in Section 2, the main work of literature [1] is briefly reviewed; in Section 3, the aspects of literature [1] that can be improved are discussed; in Section 4, a novel grouping strategy is established; Section 7 verifies the effectiveness of the proposed method through simulation and compares it with mainstream methods to highlight its advantages; and Section 8 draws the conclusion and summarizes the whole paper.

2. Review of Holonic Particle Swarm Optimization

This section begins with a brief review of the holonic structure of [1]. The principle of the HPSO algorithm is then discussed. Finally, the architecture and process of HPSO algorithm are introduced.

2.1. Holonic Structure

Multiagent systems are used because agents can learn and evolve and, through communication, cooperation, competition, and so on, can exhibit abilities that no single agent possesses, that is, realize swarm intelligence. Such systems have become a research hotspot in academia and have been widely used in many scientific fields. At present, there are two main ways to improve the capabilities of a multiagent system. One is to improve the acquisition, processing, and feedback abilities of the individual devices. The other is to build and improve the topology of the system from a system-level perspective and improve the system's information transmission and interaction, thereby improving its overall efficiency. The holonic structure is a special topology: by affecting the roles, interactions, and organizational behavior of the agent system, an originally simple system can generate complex capabilities and achieve more powerful functions.

Holonic approach is illustrated in Figure 1.

Figure 1 shows a three-layer holonic structure consisting of 11 agents. On the Level 1 layer, 11 agents are divided into three groups, and each group of agents only exchanges information within the group. Through information exchange, the group forms a virtual agent on the Level 2 layer. The virtual agent grasps all the information of the agent in the corresponding group of the Level 1 layer and obtains the optimal information of the group. Similarly, the three virtual agents of the Level 2 layer form a new group and interact with each other, thus forming a global agent at the Level 3 layer and grasping all the agent information.

If we do not build a holonic structure, the information of all 11 basic agents must be exchanged in real time, which requires each agent to have strong information transmission, storage, and processing capabilities. In this case, each node stores the global information and solves for its own optimal state according to it. Not only is the computation of this process extremely large, but the optimal state at the next moment is also calculated from the global information and the node's own state: for a single agent, that next state is individually or locally optimal. For the whole system, every basic agent being in its own optimal state does not guarantee overall optimality. For example, in a radar networking system, one point in an area may be least affected by terrain and thus best for detection, but if all radars are deployed at that point, the overall performance will be extremely low. At the same time, as the number of nodes increases, the amount of computation increases exponentially. The holonic structure is managed layer by layer: the high-level virtual agents control regional information, and the number of quantities to optimize is small. From the perspective of a particular region or even the whole region, the states of the agents can be planned to obtain a globally optimal solution. As the number of nodes increases, the holonic structure only increases the number of groups, so the amount of computation no longer increases exponentially.

The holonic structure also has better flexibility and robustness than hierarchical management. The virtual agents generated by the holonic structure can be dynamically adjusted according to the states of the underlying agents, similar to a mechanism of regional negotiation. In the hierarchical management mode, the relationship between a high-level manager and the corresponding lower-level agents is fixed, nodes are difficult to switch, and flexibility is poor. Moreover, once a high-level node fails in hierarchical management, that node and its underlying agents all stop working and overall performance drops significantly. The holonic structure's high-level nodes are virtual, so there is no possibility of such failure.

2.2. Principles of HPSO Algorithm

As scholars have studied multiagent systems in greater depth, more and more results have emerged. The multiagent system and the PSO algorithm have many architectural similarities, but no scholar had applied agent theory to improving the PSO algorithm, for two main reasons. One is that PSO particles are not agents: they cannot learn autonomously or behave in complex ways, so a particle swarm cannot be considered a multiagent system. The other is that the PSO algorithm is extremely simple to operate and has strong real-time performance; deliberately increasing the intelligence of the particles would greatly increase the complexity of the algorithm and reduce the calculation speed. It is therefore difficult to apply agent-system theory directly to improving the PSO algorithm.

Literature [1] improves the topology of PSO particles by using the holonic structure from multiagent systems; multiagent system theory was thus applied to improving the PSO algorithm for the first time, and the HPSO algorithm was proposed. To briefly discuss the principle of the HPSO algorithm, assume that 9 particles are used to solve an optimization problem. The principle of improving the PSO algorithm based on the holonic structure is shown in Figure 2.

First, the 9 particles in Level 1 are divided into three groups G1, G2, and G3. The particles in each group calculate the corresponding fitness function according to their own state information. The particle with the best fitness in each group is promoted to be the virtual leader of the group and upgraded to Level 2. After that, the three group-optimal particles P1, P2, and P3 form a new group, and the particle with the best fitness among them is upgraded to Level 3 as the overall leader. In the standard PSO algorithm, the particle state is affected by only three factors: its own velocity, its own historical best, and the global historical best. In the HPSO algorithm, the particle state is also affected by the optimal solution at each level.
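The leader-promotion step described above can be sketched in a few lines of Python (a minimal illustration assuming fitness minimization; the function name and the fitness values are hypothetical, not taken from [1]):

```python
def promote_leaders(fitness, groups):
    """For each group, the particle with the best (lowest) fitness becomes
    the group's virtual leader (Level 2); the best of those leaders becomes
    the global leader (Level 3). Returns (leader indices, global leader)."""
    leaders = [min(g, key=lambda i: fitness[i]) for g in groups]
    global_leader = min(leaders, key=lambda i: fitness[i])
    return leaders, global_leader

# 9 particles split into G1, G2, G3, as in Figure 2 (illustrative values)
fitness = [3.0, 1.5, 2.2, 0.9, 4.1, 2.7, 3.3, 0.4, 5.0]
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
leaders, top = promote_leaders(fitness, groups)
```

After this step, `leaders` holds P1, P2, and P3 and `top` is the Level 3 leader; the swarm's update can then reference the best position at each level.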

This method has obvious advantages. First, the layered architecture increases the influence of local optimal values on the particle state, which increases the diversity of the system and reduces the possibility of particles falling into local optima and converging prematurely. Second, adding the local-optimum layer balances the contradiction between search range and search accuracy: the groups are dispersed as much as possible over different areas to ensure a sufficiently large search range, while each group searches its local area in detail, improving search accuracy. At the same time, when the search range is large, PSO particles oscillate between the individual optimum and the global optimum; HPSO effectively alleviates this phenomenon by adding local optima. Finally, the simulation results in [1] show that the HPSO algorithm has obvious advantages over other improved PSO algorithms.

2.3. HPSO Algorithm Architecture and Process

The algorithm architecture and main flow of HPSO in [1] can be simplified as shown in Figure 3.

The above process can be summarized as follows:
Step 1: Initialize the PSO algorithm: the population size n, the number of iterations N, and the regrouping period Tr.
Step 2: Build the original holonic structure: set the number of layers in the structure and the number of groups per layer, and randomly group the particles.
Step 3: The first 0.8N iterations mainly perform a large-scale search to prevent particles from falling into local optima. Each time a regrouping period Tr elapses, the particles are regrouped. The grouping rule is that any two particles taken from a new group must come from different previous groups; otherwise, the particles are regrouped. During the first 0.8N iterations, the particle parameters are adjusted using equation (1) to obtain the optimal fitness:

Vel(t+1) = ω·Vel(t) + Σ_{i=1}^{levelNumber} c_i·r_i·(pBest_i − particlePosition(t)),
Position(t+1) = Position(t) + Vel(t+1), (1)

where Vel is the particle's velocity, t is the iteration number, c_i is the acceleration coefficient at level i, and r_i is a random vector at level i uniformly distributed in the range [0, 1]. levelNumber is the number of levels in the holarchy. At level one, pBest_i is the personal best position of particle i; at the other levels, pBest_i is the best position of each group of the holarchy. particlePosition is the position of each particle, ω is the inertia weight, and Position(t+1) denotes the position of the (t+1)th generation's particle.
Step 4: After the first 0.8N searches, the range in which the optimal solution may appear is basically determined, so the last 0.2N searches look for the optimal solution in a small range. Regrouping is no longer necessary, and the fine search is performed using the second update formula of [1] (its equation (2)), in which the parameters have the same meaning as in Step 3.
Step 5: Obtain the optimization results of the HPSO algorithm.
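The multi-level velocity update of Step 3 can be sketched as follows (a hedged illustration of the form of equation (1) of [1] for a minimization problem; the function name, seed, coefficients, and pBest values are illustrative assumptions, not values from the paper):

```python
import numpy as np

def hpso_velocity_update(vel, pos, p_bests, omega, c, rng):
    """One multi-level velocity update in the shape of equation (1):
    vel <- omega*vel + sum over levels i of c[i]*r_i*(pBest_i - pos),
    where each r_i is a uniform random vector in [0, 1]^d."""
    new_vel = omega * vel
    for ci, pbest in zip(c, p_bests):
        r = rng.random(pos.shape)       # fresh random vector per level
        new_vel += ci * r * (pbest - pos)
    return new_vel

rng = np.random.default_rng(0)
d = 2
pos = np.zeros(d)
vel = np.zeros(d)
# pBest at three levels: personal best, group best, global best (assumed values)
p_bests = [np.ones(d), 2 * np.ones(d), 3 * np.ones(d)]
vel = hpso_velocity_update(vel, pos, p_bests, omega=0.7, c=[2.0, 2.0, 2.0], rng=rng)
pos = pos + vel                          # position update of equation (1)
```

In HPSO the list `p_bests` would be rebuilt after every promotion of group leaders, so the pull toward each level's optimum changes as the holarchy evolves.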

The above process is the HPSO optimization process. Since the literature [1] is the first to use the holonic structure to improve the PSO algorithm, there is still room for further improvement in some aspects. Therefore, this paper carries out subsequent research.

3. Improvement of Studies

Literature [1] combined the idea of agents with PSO, treating each particle not merely as a single point but as an intelligent particle interacting between levels, thus improving the efficiency of the algorithm.

3.1. Grouping Strategy of Particles

In literature [1], the author constructed the particle groups by approximate bisection, which may be the most convenient but not necessarily the best. Also, in the conclusion of [1], the authors also discussed that “It is also possible to create groups by special rules to achieve specified purposes.” In this regard, this study proposes two completely opposite grouping strategies based on system clustering and information entropy and discusses and compares these two strategies.

The system clustering method calibrates the similarity degree of particles according to the distance between particles. Also, the effect is intuitive and obvious. By means of system clustering, particles with close distance or high similarity can be divided into groups. This grouping method can obtain local regions constructed by similar particles, and it is easy to find the local optimization of a certain region. In this way, the optimization problem is transformed into dynamically adjusting the grouping method and searching for local region optimization, and the entire optimization can then be conducted.

Entropy is a measure of the diversity or uniformity of microscopic states in thermodynamics. Information entropy is a concept in information theory used to describe the probability of different states appearing in a system. The greater the entropy, the greater the disorder and the more states can occur; as a result, the probability of the particles falling into a local optimum is smaller. This way of grouping makes the entropy, and hence the information, within each group maximal once the number of groups is fixed, i.e., it places the most distant particles in the same group. In this way, the total entropy of the system is maximized, which significantly reduces the possibility of the system falling into a local optimum.

As the no-free-lunch theorem suggests, an improved intelligent algorithm must balance exploration of the search space (diversification) against exploitation of prominent regions (intensification). From the above discussion, the grouping method based on hierarchical clustering improves the local retrieval ability, i.e., intensification, while the method based on information entropy gives the system more explorable space, i.e., improves diversification. Both are strategies for improving grouping, but their goals are completely opposite. Theoretical analysis and experimental verification of the optimization performance of the two methods are provided later. Because of this contradiction, this study applies the two methods to different stages of the search strategy to improve the efficiency of the algorithm.

3.2. Setting of Number of Iterations

In literature [1], the authors set two alternating search strategies and confirmed, through extensive experimental simulation and analysis, that the first 80% of iterations should adopt one particle motion-state update method. In this stage, the influence of group leaders at lower levels exceeds that of leaders at higher levels, and the detection ability is partially improved. During the last 20% of iterations, the impact of high-level particles becomes increasingly large, and another method is used to update the motion state of the particles.

As literature [1] states, “We have divided number of iterations to 60%–40%, 70%–30%, 80%–20%, and 90%–10%. These divisions were tested on some randomly selected functions. The best results were obtained when iterations were divided to 80% and 20%.” The authors have done a lot of work and given the experimental parameters which can get better results.

From the above discussion, two limitations of the scheme can be seen. The first is that the fixed 80%–20% proportion ties the performance of the algorithm directly to the preset number of iterations. The second is that 80%–20% is only the best of the four proportions tested in [1]. That is, once the number of iterations is fixed, the ratio of the two search methods, i.e., the switching time between the two search strategies, is highly correlated with the performance of the algorithm. Whether the 80%–20% proportion makes the algorithm optimal and whether it applies to all problems remain to be further studied.

Problems tackled by swarm intelligence algorithms are typically NP-hard, so it is almost impossible to obtain a suitable number of iterations from empirical equations or rough estimates. Adding or removing a single dimension may change the required accuracy only slightly yet change the required number of iterations completely, let alone across entirely different problems. For different problems and functions, the parameters of the algorithm itself are likely to differ significantly. Therefore, whether the 80%–20% proportion obtained by simulation in literature [1] is suitable needs further investigation.

Following the concept of literature [1], the number of iterations must be determined first and then the ratio between the two search modes; the ultimate goal of this approach is to determine the switching time between the two search strategies. If better results can be obtained another way, it is unnecessary to proceed as in literature [1]. More importantly, the two methods in literature [1] switch only once, whereas this study combines the fitness function value to build an adaptive adjustment mode for the search strategy, breaking the mode of literature [1]. According to the fitness of the particle swarm during the search, the two search strategies switch back and forth by self-adjustment, and the optimization of the problem is achieved.

3.3. Switch and Termination of Self-Adaption

The above discussion suggests that the efficiency of the algorithm in reference [1] is tied to the number of iterations. If the number of iterations is set too high, the computation is inevitably more expensive; if it is set too low, the algorithm ends prematurely and probably does not converge to the optimal solution. Meanwhile, the required number of iterations may vary significantly with the problem, the fitness function, and the required computational accuracy. Without prior knowledge or an experimental basis, it is difficult to estimate the number of iterations for a specific problem, which severely limits the applicable scope of the algorithm.

There are two purposes in determining the number of iterations of the algorithm. One is to set the number of runs of the two search strategies, and the other is to provide the termination condition of the algorithm. For the first purpose, the above discussion has provided a feasible solution. Besides reaching the number of iterations, the termination condition should reflect the requirements on the results, i.e., the fitness function no longer decreases or the predetermined calculation accuracy is reached.
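The idea of terminating once the fitness no longer decreases can be sketched as follows (a minimal illustration; `patience` and `eps` are hypothetical parameters, not values proposed in this paper, and the adaptive threshold built later is more elaborate than this fixed one):

```python
def should_terminate(best_history, patience=20, eps=1e-8):
    """Spontaneous termination: stop once the best fitness has improved
    by less than eps over the last `patience` iterations."""
    if len(best_history) <= patience:
        return False                       # not enough history yet
    return best_history[-patience - 1] - best_history[-1] < eps

# best-so-far fitness per iteration; the run plateaus at 1.0 (illustrative)
history = [10.0, 5.0, 2.0] + [1.0] * 25
done = should_terminate(history)
```

A criterion of this kind replaces the preset iteration count N: the loop runs until the condition fires, so converged runs stop early and hard runs are not cut off.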

Because problems differ, it is hard to predict the decline process and trend of the fitness function. To this end, this study builds an adaptive threshold and combines it with the previous grouping strategies. By comparing different grouping results, the termination conditions of the algorithm are constructed to determine the switching time and the termination of the algorithm.

4. Grouping Strategy

4.1. Grouping Mode Based on Hierarchical Clustering Method

When clustering begins, treat each of the n elements (particles), each with m dimensions, as its own class, and specify the distance between elements and the distance between classes. The two closest classes are then merged into a new class (the union class), and the distance between the new class and the other classes is calculated. The merging of the nearest classes is repeated, reducing the number of classes by one each time, until all elements are merged into one. Finally, a relation map (cluster tree or pedigree diagram) is formed, from which it can be clearly seen how many categories there should be and which elements each category contains.

In cluster analysis, a class is usually denoted by G. Assume that G contains m elements, denoted as column vectors x_i (i = 1, 2, …, m). d_ij denotes the distance between x_i and x_j, and D_KL denotes the distance between classes G_K and G_L. Different definitions of distance generate different hierarchical clustering methods. Here, the most mature and easiest method to implement, the shortest-distance method (single linkage), is introduced.

Define the distance between two classes as the distance between their two closest elements; it is expressed as

D_KL = min{d_ij : x_i ∈ G_K, x_j ∈ G_L}. (3)

If G_K and G_L are merged into a new class at some step, mark this class G_M. The distance between G_M and any existing class G_J is then expressed as

D_MJ = min{D_KJ, D_LJ}. (4)

The procedure of the hierarchical clustering method based on single linkage is as follows:
Step 1: Treat each initial element as one class and specify the distance between elements; the Euclidean distance is usually used to compute the distance matrix D(0) of the n elements.
Step 2: Find the minimum element of D(0), say D_KL. Merge G_K and G_L into a new class and mark it G_M, i.e., G_M = {G_K, G_L}.
Step 3: Calculate the distance between the new class G_M and any class G_J using the recursion formula

D_MJ = min{D_KJ, D_LJ}. (5)

Modify the distance matrix D(0): combine the rows and columns of G_K and G_L into a new row and column corresponding to G_M, calculate the entries of the new row and column by equation (5), and keep the remaining entries unchanged, thereby obtaining the new matrix D(1).
Step 4: Process D(1) in the same way to obtain D(2), and continue successively until all elements are merged into the preset number of groups.
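The four steps above can be sketched in Python (a naive O(n³) illustration of single-linkage merging down to a preset number of groups, not an optimized implementation; the function and variable names are hypothetical):

```python
import numpy as np

def single_linkage_groups(points, n_groups):
    """Single-linkage (shortest-distance) clustering as in Steps 1-4:
    start from singleton classes, repeatedly merge the two closest
    classes using D_MJ = min(D_KJ, D_LJ), until n_groups remain."""
    pts = np.asarray(points, dtype=float)
    # D(0): Euclidean distances between all elements
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    groups = [[i] for i in range(len(pts))]

    def group_dist(a, b):
        # single-linkage distance: closest pair across the two classes
        return min(D[i, j] for i in a for j in b)

    while len(groups) > n_groups:
        # find the pair of classes with the minimum single-linkage distance
        k, l = min(
            ((a, b) for a in range(len(groups)) for b in range(a + 1, len(groups))),
            key=lambda ab: group_dist(groups[ab[0]], groups[ab[1]]),
        )
        groups[k] = groups[k] + groups[l]   # merge G_K and G_L into G_M
        del groups[l]
    return groups

# two well-separated clouds of particles -> two groups
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
groups = single_linkage_groups(pts, n_groups=2)
```

For particle grouping this yields groups of mutually close particles, which is exactly the intensification-oriented grouping discussed in Section 3.1.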

The initial clustering of particles can be realized through the above process.

4.2. Grouping Mode Based on Information Entropy

Entropy is a measure of the diversity or uniformity of microscopic states in thermodynamics, representing the degree of disorder of a system. Information entropy was first proposed by Shannon in 1948 to describe the degree of disorder of information. The disordered degree of information is negatively correlated with the information’s utility, and the information entropy can be used to measure the extent to which the evaluation object contains effective information.

In the classical formulation of information entropy, assume there exist n elements and m dimensions, and let the matrix of original data be X = (xij) with n rows and m columns. Unitizing the original data yields

pij = xij/(x1j + x2j + ⋯ + xnj). (6)

The unitized matrix is P = (pij). The information entropy of each research object j is then

Ej = −k ∑(i = 1 to n) pij ln pij, (7)

where k = 1/ln n; if pij = 0, the term pij ln pij is taken as 0.
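A minimal sketch of equations (6) and (7), assuming a nonnegative data matrix stored as a list of rows; the function name is illustrative:

```python
import math

def information_entropy(X):
    """Entropy E_j of each column of an n x m data matrix (eqs. (6)-(7)).

    p_ij = x_ij / sum_i x_ij, E_j = -k * sum_i p_ij ln p_ij with k = 1/ln n,
    using the convention p ln p = 0 when p = 0.
    """
    n, m = len(X), len(X[0])
    k = 1.0 / math.log(n)
    E = []
    for j in range(m):
        col_sum = sum(X[i][j] for i in range(n))     # denominator of eq. (6)
        p = [X[i][j] / col_sum for i in range(n)]
        E.append(-k * sum(pi * math.log(pi) for pi in p if pi > 0))
    return E
```

A perfectly uniform column gives Ej = 1, the maximum; a concentrated column gives a smaller value.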

In this paper, the original formula of information entropy is not used directly; rather, the concept of information entropy is borrowed and simplified.

On the premise that the number of groups is fixed, the greater the entropy within a group, the more widely dispersed its particles, and therefore the lower the probability that the group of particles will fall into a local optimum. Assume that the n elements are divided into Q groups, that there are P grouping methods in total, and that the information entropies within the groups of one grouping method are Ez1, Ez2, …, EzQ. The grouping sought is the one whose sum of information entropy reaches the maximum:

max (Ez1 + Ez2 + ⋯ + EzQ). (8)

Then, the space explored by this grouping method is the largest, and the probability of falling into local optimum is significantly reduced.

Obviously, equation (8) is itself an optimization problem, and traversing all P possible groupings would make the amount of calculation explode, so an approximate method is adopted in this paper. The greater the entropy of a group, the weaker the relationship between its particles and the greater the distances between them. This is the opposite of the system clustering method of the previous section: system clustering gathers particles that are close to each other into a class, while the information entropy grouping gathers mutually distant particles into a class.

In this regard, when every element of the distance matrix D(0) in the system clustering method is inverted (so that the largest distances become the smallest) and all other operations remain identical, the particles farthest from each other are clustered into one class. The calculation is also significantly simplified.
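This inversion trick can be sketched by running the same single-linkage merging on an inverted distance matrix. The routine below is an illustrative reading of the procedure (here inversion is taken as the reciprocal), not the published code:

```python
import math

def entropy_style_groups(points, n_groups, eps=1e-12):
    """Group mutually distant particles together: invert every entry of D(0)
    and apply the same single-linkage merging.  Illustrative sketch only."""
    n = len(points)
    # Large original distance -> small inverted value -> merged first.
    inv = [[1.0 / (math.dist(points[i], points[j]) + eps) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    groups = [[i] for i in range(n)]

    def group_dist(ga, gb):
        return min(inv[i][j] for i in ga for j in gb)

    while len(groups) > n_groups:
        k, l = min(((a, b) for a in range(len(groups))
                    for b in range(a + 1, len(groups))),
                   key=lambda kl: group_dist(groups[kl[0]], groups[kl[1]]))
        groups[k] += groups[l]
        del groups[l]
    return groups
```

The first merge now joins the farthest pair of particles, which is exactly the behavior wanted for the entropy-style grouping.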

4.3. Combination of the Grouping Method and the Search Strategy

Through the above discussion, the grouping criteria of the two methods are completely different. Since literature [1] defines exactly two search strategies, the two types of search in [1] are called extensive search and accurate search here for clarity of expression.

Extensive search is equation (5) in literature [1], namely, equation (1) in this study; accurate search is equation (8) in literature [1], namely, equation (2) in this study.

Since both search methods involve grouping particles, four results are yielded by combining the two methods proposed in this paper with the two search strategies, as shown in Table 1.


Plan | First stage | Second stage

Original plan | Extensive search | Accurate search
Plan 1 | System clustering method + extensive search | System clustering method + accurate search
Plan 2 | Information entropy method + extensive search | Information entropy method + accurate search
Plan 3 | System clustering method + extensive search | Information entropy method + accurate search
Plan 4 | Information entropy method + extensive search | System clustering method + accurate search

Though new grouping methods are established, if a grouping method is mismatched with the search strategy, it will not only cause a surge in computation but may also lead to convergence on a local optimum, and the improved algorithm may even be less efficient than the original one. Therefore, the four plans are discussed and analyzed below.

There is a common problem in plans 1 and 3. Extensive search benefits from as large a search space as possible, whereas the system clustering method does the opposite: it locks the particles into several small local areas, as shown in Figure 4.

For the minima in Figure 4, if the initial particles are randomly distributed on both sides of the minimum value, much time is spent searching multiple local optima, and the true minimum may be passed over in the subsequent migration process. In this way, the algorithm may return a nonglobal optimum. Combining the system clustering method with extensive search therefore reduces algorithm efficiency, so plans 1 and 3 are not ideal.

For plan 2, in the later accurate search stage, the information entropy grouping may make it difficult for particles to aggregate, so the search efficiency in the precise search region is not high, which decreases the overall algorithm efficiency.

For plan 4, the information entropy method is combined with extensive search in the early stage, which makes the search broader and leaves almost no region searched repeatedly, while in the later stage, the system clustering method is combined with accurate search to make the search more precise. Qualitative analysis suggests that both search strategies are strengthened in plan 4 and the overall efficiency of the algorithm is improved. Therefore, in the first part of the simulation, the algorithm efficiency of the four combinations and the original method (five methods in total) is compared, which corroborates the discussion in this paper.

5. Condition Switching Criteria

The improved particle swarm algorithm in reference [1] involves two state-switching moments: the first is the switch from extensive search to precise search, and the second is the end of the algorithm. The original paper studies the first switching point and, through extensive simulation and parameter comparison, gives the most ideal setting found in its experiments, i.e., extensive search is used in the first 80% of iterations and precise search in the last 20%. This leaves three problems:

(1) Is the 80%–20% ratio optimal?

(2) Only a single switch from extensive search to precise search is considered, never a switch back from precise to extensive search, which reduces the likelihood of jumping out of a local optimum.

(3) The total number of iterations is associated with the performance of the algorithm. How should it be determined?

In view of the above problems, two adaptive switching criteria are designed in this section. The first criterion is to switch from extensive search to accurate search according to the absolute slope of fitness function in the context of extensive search. The second criterion is to realize the algorithm’s termination or switch to extensive search according to the distance relation of grouping under the precise search condition.

The adaptive strategy removes the need to preset the number of iterations: the corresponding ratios and search strategies are switched automatically. At the same time, the algorithm can terminate adaptively, and the termination conditions are strict, so the algorithm is even less likely to fall into a local optimum.

5.1. Threshold of Self-Adaption

Based on the above discussion, the particle swarm initially adopts the extensive search strategy based on the information entropy method, during which the grouping of the particles is not changed. After a certain number of iterations, the particles gradually aggregate to the region or neighborhood where the global optimal solution is located. In the following process, the search strategy needs to be changed to improve the accuracy and speed of the search in local areas. At this point, the strategy should be switched to the precise search based on system clustering.

In the search process, the value of the fitness function decreases as the number of iterations increases; when a local or global optimum is found, its value may stop changing. The fitness function is not an analytical expression but is calculated during the iteration. It is therefore impossible to estimate the fitness value at convergence in advance, and for different optimization problems, the order of magnitude of the fitness value when the algorithm stops differs.

Based on the change trend of the absolute value of the slope of the fitness function, it yields

ki = |Ffit(i) − Ffit(i − 1)|, (9)

where i denotes the number of iterations, Ffit(i) is the fitness function value of the ith iteration, and ki is the absolute value of the change rate of the fitness function.

ki can reflect the searching state of the particles. At the initial stage, the fitness function value drops rapidly, and there is no need to switch. When the particles gradually find a local or global optimum, the fitness function decreases slowly, and the rate of change presents an overall decreasing trend, at which point the algorithm can switch to accurate search. For this reason, the condition for judging the switch from extensive search to precise search is constructed as

(ki−j − ki)/ki−j ≥ ω, (10)

where (ki−j − ki)/ki−j denotes the ratio of the change in slope (ki−j − ki) to the original slope ki−j after j iterations. As the particles get closer to the optimal solution, this ratio increases, meaning that the descent of the particles becomes increasingly slow. ω is a weight coefficient, normally valued in [0.8, 0.9], and j can be a small positive integer. The values of ω and j need not be chosen precisely, because the reverse-switching criterion discussed later can switch the state back even if the initial values are not well chosen; the two criteria complement each other to improve the optimization efficiency, which is a major advantage of this method.

Given that the search may be trapped in a local optimum, the slope ki−j may be 0, in which case the denominator of equation (10) fails. Thus, in the actual optimization process, the multiplicative form

ki−j − ki ≥ ω · ki−j (11)

is employed for discrimination; the state can be switched if equation (11) is met.

However, theoretical analysis and experimental results suggest that equation (11) is also met when the (i − 2j)th iteration approaches a local optimum, the (i − j)th iteration is at or near that local optimum, and the ith iteration escapes from it; this is obviously not the time to switch. Thus, a criterion is added before equation (11) to determine whether a local optimum has just been escaped. If the algorithm escapes a local optimum, the rate of change of the fitness function increases sharply and is obviously greater than the previous slope, so the first criterion is taken as

ki ≤ α · ki−j, (12)

where α can be a positive integer between 2 and 10. Equation (12) is checked first, and then equation (11) is used for discrimination. When both are satisfied, the system switches to the accurate search mode based on system clustering; otherwise, the extensive search continues.
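The double criterion can be sketched as follows. The exact forms of equations (11) and (12) are reconstructed here from the surrounding text, so this is an illustrative sketch rather than the published implementation:

```python
def should_switch(F, j=1, omega=0.8, alpha=3):
    """Decide whether to switch from extensive to accurate search.

    F: history of fitness values Ffit(0..i).  As reconstructed from the text:
      k_i = |F(i) - F(i-1)|                    (eq. (9))
      k_{i-j} - k_i >= omega * k_{i-j}         (eq. (11), multiplicative form)
      k_i <= alpha * k_{i-j}                   (eq. (12): not just escaped
                                                a local optimum)
    """
    i = len(F) - 1
    if i - j < 1:                      # not enough history for both slopes
        return False
    k = lambda t: abs(F[t] - F[t - 1])
    ki, kij = k(i), k(i - j)
    escaped = ki > alpha * kij         # slope rose sharply: keep exploring
    converging = (kij - ki) >= omega * kij
    return (not escaped) and converging
```

A flattening fitness curve triggers the switch, while a sudden steep drop (interpreted as escaping a local optimum) suppresses it.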

5.2. Algorithm Termination and Reverse Switching Conditions

The condition for moving from extensive search to precise search was discussed above. When the particles switch to precise search, a local or global optimum has been located, and the algorithm is expected either to jump out of the local optimum or to converge to the global optimum. When the particles settle on a local optimum, the fitness function no longer decreases but is not globally optimal. At this point, the fitness function can serve only as an auxiliary discrimination condition, and a new discrimination condition is required.

Besides the fitness function, the state of the particles, i.e., their position information, also changes. The conventional approach is to end the algorithm when all particles converge to a point, i.e., when the coordinates of all particles are almost identical, but whether they have converged to the global optimum is then hard to guarantee. Therefore, this paper constructs the discriminant condition from the dynamic position information of the particles and uses it to switch the system state.

The discrimination is still constructed as a double discriminant condition. Whether the particles converge to the global or a local optimum, the fitness function no longer decreases and the corresponding slope is 0, suggesting the first discrimination condition

ki = 0. (13)

If the condition of equation (13) is not satisfied, the system will continue to perform the exact search. Otherwise, the next step will be determined.

When the condition of equation (13) is satisfied, the particles are grouped again by both of the previous grouping methods, system clustering and information entropy; they are regrouped but not searched. The sum of distances within each group p is defined as SDp, expressed as

SDp = ∑ dij (xi, xj ∈ Gp, i < j). (14)

Let SDpemin denote the minimum sum of distances over the groups formed by the information entropy method, and SDpsmax the maximum sum of distances over the groups formed by the hierarchical clustering method. These are written as

SDpemin = min over p of SDp (entropy-based grouping), (15)
SDpsmax = max over p of SDp (clustering-based grouping). (16)

The second discrimination condition is

SDpemin ≤ SDpsmax. (17)

When the sum of distances of the tightest group under the information entropy method is not greater than that of the loosest group under the system clustering method, it can be judged that the particles have converged to the global optimum, and the algorithm ends.

The criterion of equation (13) shows that all particles have clustered in a certain region. At this point, searching again only seeks the optimum within this region, and the search strategy no longer changes: the precise search strategy is used until all particles cluster at a single point, at which stage the algorithm has converged. This method achieves autonomous convergence and improves the adaptability of the method.
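A sketch of the termination test built from equations (14)–(17) as read above; the helper name `sum_of_distances` and the grouping arguments are illustrative assumptions:

```python
import math

def sum_of_distances(points, group):
    """SD_p: sum of pairwise distances inside one group (eq. (14) as read here)."""
    return sum(math.dist(points[a], points[b])
               for i, a in enumerate(group) for b in group[i + 1:])

def converged(points, entropy_groups, cluster_groups):
    """Second termination condition (eqs. (15)-(17) as reconstructed):
    the tightest entropy-based group must be no looser than the loosest
    clustering-based group."""
    sd_pe_min = min(sum_of_distances(points, g) for g in entropy_groups)
    sd_ps_max = max(sum_of_distances(points, g) for g in cluster_groups)
    return sd_pe_min <= sd_ps_max
```

For a swarm collapsed onto one point both quantities vanish and the test passes; for a swarm still spread over distant clusters the entropy-based groups remain far looser and the test fails, so the search continues.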

6. Flow of Adaptive Holonic Particle Swarm Optimization

To sum up, the algorithm flow of Adaptive Holonic Particle Swarm Optimization (AHPSO) designed and optimized in this section is illustrated in Figure 5.

The above algorithm flow can be summarized as the following steps:

Step 1: initialize the system parameters, setting the population size, inertia coefficient, weight coefficient, number of groups, number of system layers, and other parameters.

Step 2: group the particles by the information entropy method, carry out the extensive search strategy, and calculate the fitness function and slope.

Step 3: discriminate using equations (11) and (12). Return to Step 2 when either condition is not met; go to Step 4 when both are met.

Step 4: group the particles by system clustering.

Step 5: carry out the accurate search strategy and calculate the fitness function and slope.

Step 6: determine whether the slope of the fitness function is 0. If not, go back to Step 5; otherwise, go to Step 7.

Step 7: keep the particles still, group them by the two methods, calculate SDpemin and SDpsmax, and compare them. When SDpemin is larger than SDpsmax, go back to Step 2. When SDpemin is not larger than SDpsmax, the search strategy does not switch but continues the precise search until all particles converge to a point.

Step 8: obtain the coordinates of the optimal value, i.e., the optimal solution, and the algorithm ends.

Step 3 and Step 7 in the above process switch the system state and change the system search strategy adaptively according to the actual state of particles. More importantly, the algorithm can self-converge without presetting the number of iterations, thereby improving its applicability.
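To make the control flow concrete, the state machine of Steps 1–8 can be caricatured with a toy single-point search. The holonic swarm, grouping, and search equations of [1] are deliberately replaced by simple random perturbation, so this shows only the adaptive switching logic, not the published algorithm:

```python
import random

def ahpso_skeleton(fitness, x0, omega=0.8, guard=5000):
    """Toy skeleton of the AHPSO state machine: extensive search ->
    slope test (Step 3) -> accurate search -> zero slope (Steps 6-7) -> stop.
    Illustrative only; the real grouping and holonic search are omitted."""
    random.seed(1)
    x, state = list(x0), "extensive"
    F = [fitness(x)]
    for _ in range(guard):
        step = 1.0 if state == "extensive" else 0.01   # coarse vs fine moves
        cand = [xi + random.uniform(-step, step) for xi in x]
        if fitness(cand) < F[-1]:                      # keep only improvements
            x = cand
        F.append(fitness(x))
        if len(F) < 3:
            continue
        k_now = abs(F[-1] - F[-2])
        k_prev = abs(F[-2] - F[-3])
        if state == "extensive" and (k_prev - k_now) >= omega * k_prev:
            state = "accurate"                         # Step 3 -> Step 4
        elif state == "accurate" and k_now == 0 and k_prev == 0:
            break                                      # Steps 6-8: slope stays 0
    return x, F[-1], state
```

The point is that no iteration budget is preset: the loop bound is only a safety guard, and the run ends when the slope criterion does.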

7. Verification of Simulation

To verify the effectiveness of the proposed algorithm and guarantee the objectivity of the comparison, the 22 standard test functions in the appendix of literature [1] are adopted, and the subsequent simulation parameters in this study are kept consistent with those in [1]. In other words, each algorithm runs 30 times, the population size is 100, the particles are divided into 5 groups, the holonic structure has 3 layers, r is randomly valued in [0, 1], the inertia coefficient starts at about one and gradually decreases as the number of iterations rises, c = 1.3667, the algorithm is iterated 10^6 times, and the dimension of the test functions is D = 30. In the optimized grouping strategy, j = 1, ω = 0.8, and α = 3. The mean, standard deviation (Std), and median are selected for comparison. Since this study builds on literature [1], it directly quotes the simulation results in [1], namely, the optimization results of HPSO on the 22 groups of standard test functions and the optimal value of the algorithm for each group of test functions.

The simulations were run on a Lenovo personal computer with a Core™ i7-4960 CPU at a clock frequency of 2.60 GHz and 16 GB of memory.

In this study, two groups of simulation experiments are conducted. The first group compares the four improved PSOs formed by combining the two grouping methods discussed above with the two search strategies, together with various other improved PSO algorithms. The second group compares the AHPSO algorithm with the HPSO algorithm.

7.1. Comparison between the Combinations of Grouping Method and Searching Strategy

This paper proposes two grouping methods, based on information entropy and on system clustering. At the same time, the HPSO algorithm has two strategies: extensive search and accurate search. By permutation and combination, four schemes can be formed. Although the performance of the four schemes has been analyzed qualitatively, it needs to be verified quantitatively.

For clarity, the four search schemes are named briefly: if the first stage uses the information entropy method + extensive search and the second stage uses the system clustering method + precise search, the scheme is denoted E-S-HPSO. Likewise, the other three are E-E-HPSO, S-E-HPSO, and S-S-HPSO. These four algorithms are compared with HPSO on the test functions with D = 30. Since only the improvement from the grouping strategy is being examined here, the number of iterations is 10^6 and the search strategy proportion is 80%–20%.

At the same time, referring to the parameter settings of literature [1], comparisons are made with existing methods: Local PSO (LPSO) [35], Hierarchical PSO (PS2O) [36], Multi-Population Cooperative PSO (MCPSO) [37], Competitive and Cooperative PSO with Information Sharing Mechanism (CCPSO-ISM) [38], Human-Brain Simulated PSO (HSPSO) [39], Fully Informed PSO (FIPSO) [40], and Fitness-Distance-Ratio Based PSO (FDRPSO) [41]. The mean, Std, and median over 30 Monte Carlo runs are obtained; the results are listed in Tables 2 and 3.


Test function | Performance | Best | HPSO | E-S-HPSO | S-E-HPSO | E-E-HPSO | S-S-HPSO

F1 | Mean | 0E000 | 0E000 | 0E000 | 0E000 | 0E000 | 0E000
F1 | Std | 0E000 | 0E000 | 0E000 | 0E000 | 0E000 | 0E000
F1 | Median | 0E000 | 0E000 | 0E000 | 0E000 | 0E000 | 0E000

F2 | Mean | 0E000 | 0E000 | 0E000 | 0E000 | 0E000 | 0E000
F2 | Std | 0E000 | 0E000 | 0E000 | 0E000 | 0E000 | 0E000
F2 | Median | 0E000 | 0E000 | 0E000 | 0E000 | 0E000 | 0E000

F3 | Mean | 9.63559E−9 | 9.63559E−9 | 0E000 | 6.96509E+3 | 1.1175E0 | 2.97874E+1
F3 | Std | 4.59479E−8 | 4.59479E−8 | 0E000 | 5.56447E+3 | 1.641275E0 | 8.35595E+1
F3 | Median | 1.31414E−16 | 1.31414E−16 | 0E000 | 6.79982E−3 | 1.121311E0 | 2.7192E+1

F4 | Mean | 4.07853E+6 | 4.07853E+6 | 3.808925E+6 | 1.452359E+7 | 4.361641E+7 | 5.031646E+6
F4 | Std | 4.42817E+6 | 4.42817E+6 | 3.506181E+6 | 3.66862E+6 | 3.66862E+6 | 5.17233E+6
F4 | Median | 5.28766E+6 | 5.28766E+6 | 3.865954E+6 | 1.407532E+6 | 4.302474E+7 | 5.102473E+6

F5 | Mean | 6.64778E+3 | 6.64778E+3 | 7.254042E+3 | 9.2049605E+3 | 7.409303E+3 | 8.120749E+3
F5 | Std | 1.34526E+3 | 1.34526E+3 | 6.305487E+3 | 9.124144E+3 | 7.417192E+3 | 8.071724E+3
F5 | Median | 6.64302E+3 | 6.64302E+3 | 7.611413E+3 | 9.2794509E+3 | 7.442609E+3 | 8.1289E+3

F6 | Mean | 3.662076E0 | 3.662076E0 | 3.25191E0 | 7.11516E+1 | 3.192067E+1 | 3.00596E+1
F6 | Std | 4.348042E0 | 4.348042E0 | 3.754928E0 | 1.02242E+2 | 3.228586E+1 | 1.64801E+1
F6 | Median | 2.40781E−1 | 2.40781E−1 | 2.393705E−1 | 6.99444E+2 | 2.759627E+1 | 2.614366E+1

F7 | Mean | 6.2172E−15 | 2.132E−7 | 7.254042E−7 | 9.2049605E−3 | 7.409303E−5 | 8.120749E−5
F7 | Std | 0E000 | 3.8009E−7 | 6.305487E−7 | 9.124144E−3 | 7.417192E−5 | 8.071724E−5
F7 | Median | 6.2172E−15 | 6.2172E−15 | 7.611413E−14 | 9.2794509E−3 | 7.442609E−5 | 8.1289E−5

F8 | Mean | 0E000 | 7.4E−4 | 1.093266E−9 | 7.73591E−1 | 6.849328E−2 | 3.7138E−2
F8 | Std | 0E000 | 2.257E−3 | 1.109273E−8 | 1.214117 | 5.326303E−2 | 2.2558E−2
F8 | Median | 0E000 | 0E000 | 1.050083E−9 | 8.16841E−1 | 6.3397E−2 | 4.27247E−2


Test function | Performance | LPSO | MCPSO | PS2O | HSPSO | CCPSO-ISM | FDR-PSO | FIPPSO

F1 | Mean | 1.60014E−34 | 6.26379E−81 | 1.76947E0 | 1.12945E−8 | 5.07323E−47 | 0E000 | 2.93494E−30
F1 | Std | 5.93931E−34 | 2.49518E−80 | 6.75503E0 | 4.39952E−8 | 2.35810E−46 | 0E000 | 3.33407E−30
F1 | Median | 2.86022E−35 | 9.93019E−82 | 0E000 | 1.01662E−9 | 2.45805E−47 | 0E000 | 2.13543E−30

F2 | Mean | 3.27043E−60 | 9.74950E−89 | 0E000 | 1.787335E−81 | 1.70045E−110 | 0E000 | 1.17408E−68
F2 | Std | 4.41014E−59 | 6.40710E−88 | 0E000 | 1.66665E−81 | 2.08581E−110 | 0E000 | 4.13610E−68
F2 | Median | 4.01844E−62 | 1.08863E−90 | 0E000 | 1.84299E−82 | 1.222593E−112 | 0E000 | 5.77086E−69

F3 | Mean | 1.70201E0 | 1.79866E+4 | 3.27678E+3 | 3.40681E+3 | 3.16500E3 | 1.76578E2 | 3.44002E0
F3 | Std | 3.26814E0 | 3.99189E+3 | 2.38302E+3 | 2.93343E+3 | 1.32869E3 | 5.70738E1 | 1.38666E0
F3 | Median | 1.12323E0 | 1.21027E+4 | 2.29597E+3 | 4.33609E+3 | 6.32184E3 | 1.13972E2 | 3.60090E0

F4 | Mean | 5.45540E+6 | 7.23631E+6 | 1.39998E+7 | 8.68470E+6 | 9.18903E6 | 4.38271E6 | 2.18793E7
F4 | Std | 5.11632E+7 | 5.93250E+6 | 3.16866E+7 | 2.18534E+6 | 1.40322E6 | 2.03391E6 | 5.19729E6
F4 | Median | 3.97467E+6 | 4.01336E+6 | 2.74837E+7 | 5.41334E+6 | 8.99822E6 | 4.50993E6 | 2.14160E7

F5 | Mean | 7.51895E+3 | 9.42133E+3 | 1.01102E+4 | 7.89266E+3 | 6.00111E3 | 7.32004E3 | 1.55529E4
F5 | Std | 5.16267E+3 | 3.00486E+3 | 1.92308E+4 | 7.51229E+3 | 1.29366E3 | 6.48941E2 | 1.09525E3
F5 | Median | 3.2567E+2 | 3.73071E+2 | 1.729841E+4 | 7.78256E+3 | 6.45066E3 | 7.60076E3 | 1.73125E3

F6 | Mean | 1.63600E+1 | 8.12548E+1 | 6.58557E+1 | 1.223063E0 | 1.12802E+1 | 2.05885E+1 | 2.68981E+1
F6 | Std | 2.5058E−1 | 1.7968887E+2 | 1.37060E+2 | 1.83256E0 | 1.29335E+1 | 1.75849E0 | 6.6699E−1
F6 | Median | 1.58943E+1 | 2.10025E+1 | 7.49879E−16 | 1.28955E+1 | 1.13015E+1 | 2.04516E+1 | 2.86414E+1

F7 | Mean | 6.2172E0 | 1.51248E+1 | 2.10483E0 | 2.01931E−9 | 8.65697E−12 | 2.28853E0 | 4.83970E−4
F7 | Std | 0E000 | 1.12796E+1 | 4.67083E0 | 4.35639E−7 | 4.43108E−11 | 0.57968E0 | 5.80258E−4
F7 | Median | 6.2172E0 | 1.86962E+1 | 5.79893E−15 | 8.66757E−8 | 2.70069E−13 | 2.47471E0 | 2.18214E−4

F8 | Mean | 6.12511E−3 | 7.59568E−3 | 4.98476E0 | 2.35479E−8 | 0E000 | 1.46257E−2 | 2.50855E−13
F8 | Std | 5.48186E−3 | 6.57201E−3 | 1.93703E+1 | 7.27168E−8 | 0E000 | 1.55388E−2 | 9.82964E−13
F8 | Median | 0E000 | 0E000 | 2.90790E−3 | 3.69214E−8 | 0E000 | 7.37120E−3 | 0E000

Comparing the results in Tables 2 and 3 shows that E-S-HPSO is slightly stronger than HPSO. This is because only the grouping strategy of the particles is changed while their search method is not. On F5 and F7 its performance does not surpass that of HPSO, so judged by the results alone, E-S-HPSO does not significantly improve the accuracy of the algorithm.

Overall, the E-S-HPSO algorithm performs better and is significantly better than other improved algorithms.

The results of comparison of functions from F9 to F16 are listed in Tables 4 and 5.


Test function | Performance | Best | HPSO | E-S-HPSO | S-E-HPSO | E-E-HPSO | S-S-HPSO

F9 | Mean | 0E000 | 1.39034E+1 | 1.38478E+1 | 7.63094E+1 | 2.01554E+1 | 3.49364E+1
F9 | Std | 0E000 | 3.83982E0 | 3.44969E0 | 7.17534E+1 | 8.69272E0 | 8.82999E0
F9 | Median | 0E000 | 1.42459E+1 | 1.40510E+1 | 7.37974E+1 | 2.36655E+1 | 3.04197E+1

F10 | Mean | 9.3786E−1 | 4.15448E+1 | 4.14240E+1 | 1.78288E+2 | 4.59246E+1 | 6.24880E+1
F10 | Std | 9.7714E−2 | 1.30013E+1 | 1.27993E+1 | 5.17995E+1 | 1.94761E+1 | 2.29971E+1
F10 | Median | 9.87329E−1 | 2.07210E+1 | 2.06728E+1 | 1.74495E+2 | 4.09533E+1 | 5.60650E+1

F11 | Mean | 2.08050E+1 | 2.08050E+1 | 2.08456E+1 | 1.09322E+2 | 3.55660E+1 | 2.46454E+1
F11 | Std | 8.38112E−2 | 8.38112E−2 | 5.48504E−1 | 9.14125E+1 | 2.59134E0 | 3.05622E0
F11 | Median | 2.07210E+1 | 2.07210E+1 | 2.11089E+1 | 1.00339E+2 | 3.23142E+1 | 2.58546E+1

F12 | Mean | 0E000 | 4.15771E+1 | 4.12895E+1 | 2.51448E+2 | 4.71632E+1 | 4.25551E+1
F12 | Std | 0E000 | 6.66764E0 | 6.63775E0 | 3.28548E+1 | 9.63942E0 | 2.35036E+1
F12 | Median | 0E000 | 4.38653E+1 | 4.37479E+1 | 2.37401E+2 | 5.12106E+1 | 4.03213E+1

F13 | Mean | 5.02454E+1 | 5.02454E+1 | 5.00688E+1 | 1.18218E+2 | 5.53378E+1 | 7.03207E+1
F13 | Std | 4.77167E0 | 4.77167E0 | 4.36107E0 | 3.84270E+1 | 1.90461E+1 | 1.04847E+1
F13 | Median | 4.9748E+1 | 4.9748E+1 | 4.97403E+1 | 2.18707E+1 | 6.35530E+1 | 6.08155E+1

F14 | Mean | 2.039144E+1 | 2.039144E+1 | 2.03699E+1 | 3.10676E+1 | 2.11816E+1 | 3.42132E+1
F14 | Std | 2.77772E0 | 2.77772E0 | 2.69322E0 | 1.81535E+1 | 1.38456E+1 | 1.62269E+1
F14 | Median | 1.99736E+1 | 1.99736E+1 | 1.96490E+1 | 2.93910E+1 | 2.40104E+1 | 2.46657E+1

F15 | Mean | 6.0258E0 | 6.0258E0 | 6.21510E0 | 8.39310E+1 | 1.23683E+1 | 3.16915E+1
F15 | Std | 2.6258E0 | 2.6258E0 | 3.03159E0 | 5.41295E+1 | 1.08439E+1 | 2.19687E+1
F15 | Median | 6.8994E0 | 6.8994E0 | 7.16581E0 | 7.72715E+1 | 2.10405E+1 | 1.87566E−1

F16 | Mean | 1.15262E+1 | 1.15262E+1 | 1.12527E+1 | 1.00618E+2 | 1.77924E+1 | 1.72539E+1
F16 | Std | 2.05534E−1 | 2.05534E−1 | 5.73736E−2 | 3.36218E+1 | 1.49513E0 | 3.05312E0
F16 | Median | 1.15263E+1 | 1.15263E+1 | 1.11540E+1 | 8.14009E+1 | 1.60481E+1 | 1.59870E+1


Test function | Performance | LPSO | MCPSO | PS2O | HSPSO | CCPSO-ISM | FDR-PSO | FIPPSO

F9 | Mean | 1.51861E+1 | 9.14724E+1 | 8.93109E+1 | 5.48581E−5 | 0E000 | 1.14700E+1 | 4.37977E+1
F9 | Std | 2.11233E0 | 3.16411E+1 | 2.78157E+1 | 9.60707E−5 | 0E000 | 3.48441E0 | 1.30299E+1
F9 | Median | 1.42608E+1 | 1.02688E+2 | 7.82676E+1 | 4.08772E−5 | 0E000 | 1.4954E+1 | 4.53087E+1

F10 | Mean | 5.77869E+1 | 1.04384E+2 | 9.43819E+1 | 4.63460E+1 | 8.12661E−1 | 3.29247E0 | 4.04253E+1
F10 | Std | 2.51335E+1 | 8.82423E+1 | 7.60825E+1 | 1.56922E+1 | 9.1512E−2 | 2.6840E−1 | 1.00620E+1
F10 | Median | 5.49712E+1 | 8.06500E+1 | 9.38245E+1 | 4.52979E+1 | 9.2231E−1 | 3.14882E0 | 4.30543E+1

F11 | Mean | 2.52855E+1 | 2.64517E+1 | 2.30206E+1 | 2.88386E+1 | 2.47033E+1 | 2.9646E+1 | 2.60013E+1
F11 | Std | 1.19867E−2 | 4.00292E−2 | 8.27323E−2 | 6.71221E−3 | 7.56920E−2 | 0.00326E−2 | 5.35043E−2
F11 | Median | 2.59809E+1 | 2.61515E+1 | 2.66347E+1 | 2.57792E+1 | 2.62502E+1 | 2.37816E+1 | 2.20501E+1

F12 | Mean | 4.54933E+1 | 6.47593E+1 | 5.76266E+1 | 4.63969E+1 | 0E000 | 4.45944E+1 | 5.89902E+1
F12 | Std | 1.64122E+1 | 5.81843E0 | 1.26950E+1 | 1.55139E+1 | 0E000 | 1.72914E+1 | 1.97771E+1
F12 | Median | 2.51570E+1 | 6.39535E+1 | 6.48912E+1 | 4.56778E+1 | 0E000 | 4.08652E+1 | 5.92656E+1

F13 | Mean | 2.77699E+2 | 1.03733E2 | 1.24333E+2 | 1.95113E+2 | 1.32693E+2 | 5.97127E+1 | 1.69809E+2
F13 | Std | 2.56696E+1 | 3.96304E+1 | 3.59524E+1 | 2.09328E+1 | 7.97868E0 | 1.32038E+1 | 2.86377E0
F13 | Median | 2.82621E+2 | 1.17977E2 | 1.09972E+2 | 1.60887E+2 | 1.15773E+2 | 5.86959E+1 | 1.85220E+2

F14 | Mean | 2.84475E+1 | 2.76923E+1 | 3.21626E+1 | 2.09334E+1 | 2.59571E+1 | 2.26436E+1 | 3.56754E+1
F14 | Std | 3.08439E0 | 3.90532E0 | 6.36128E0 | 2.52636E0 | 2.11251E0 | 4.14451E0 | 1.20684E0
F14 | Median | 21.93042E+1 | 2.73032E+1 | 3.17851E+1 | 2.17317E+1 | 2.68011E+1 | 1.13267E+1 | 3.00738E+1

F15 | Mean | 8.79677E0 | 1.25428E+1 | 2.38048E+1 | 8.78020E0 | 7.60963E0 | 9.25348E0 | 5.95142E+1
F15 | Std | 6.38876E0 | 9.88228E0 | 7.10735E0 | 3.24385E0 | 3.79567E0 | 6.44398E0 | 1.96684E+1
F15 | Median | 8.23932E0 | 8.94184E0 | 2.77004E0 | 7.13170E0 | 6.39411E0 | 9.71773E0 | 5.82241E+1

F16 | Mean | 1.69691E0 | 1.10219E+1 | 1.22394E+1 | 1.34479E0 | 1.89622E+1 | 1.50032E+1 | 1.70448E+1
F16 | Std | 1.451662E0 | 7.17067E−1 | 2.77381E−1 | 1.71356E0 | 2.1180E−1 | 2.69315E−1 | 0.18570E−1
F16 | Median | 1.325485E0 | 1.02227E+1 | 1.25632E+1 | 1.85263E+1 | 1.73979E+1 | 1.77383E+1 | 1.09972E+1

The comparison results in Tables 4 and 5 suggest that E-S-HPSO performs better than HPSO except on F10 and F15, although there is no change in order of magnitude. This is because the search strategies of the two are unchanged, so the results do not establish an order-of-magnitude advantage.

It can be seen from the comparison that the E-S-HPSO algorithm is superior to the other improved algorithms.

Finally, F17 to F22 are compared, and the results are shown in Tables 6 and 7.


Test function | Performance | Best | HPSO | E-S-HPSO | S-E-HPSO | E-E-HPSO | S-S-HPSO

F17 | Mean | 4.30154E+2 | 5.21984E+2 | 4.52073E+2 | 5.89596E+2 | 5.39387E+2 | 5.30223E+2
F17 | Std | 7.89852E+1 | 4.07938E+1 | 4.10052E+1 | 6.97003E+1 | 4.13033E+1 | 4.87111E+1
F17 | Median | 4.41121E+2 | 5.22974E+2 | 4.52021E+2 | 5.90155E+2 | 5.26600E+2 | 5.26257E+2

F18 | Mean | 9.58097E+1 | 9.58097E+1 | 9.54203E+1 | 1.65324E+2 | 1.21691E+2 | 1.00038E+2
F18 | Std | 1.87319E+1 | 1.87319E+1 | 1.85202E+1 | 2.55312E+1 | 3.32608E+1 | 2.44135E+1
F18 | Median | 1.03718E+2 | 1.03718E+2 | 1.03673E+2 | 1.29197E+2 | 1.29064E+2 | 1.10376E+2

F19 | Mean | 8.83331E+1 | 8.83331E+1 | 8.81999E+1 | 1.60737E+2 | 9.46153E+1 | 1.13171E+2
F19 | Std | 3.57758E+1 | 3.57758E+1 | 3.56990E+1 | 6.25591E+1 | 5.23445E+1 | 4.87242E+1
F19 | Median | 7.30900E+1 | 7.30900E+1 | 7.29495E+1 | 1.57529E+2 | 9.19865E+1 | 1.05173E+2

F20 | Mean | 8.91346E+2 | 9.80514E+2 | 9.20294E+2 | 1.41496E+3 | 9.84232E+2 | 1.01776E+3
F20 | Std | 2.34994E+1 | 2.29816E+1 | 2.27180E+1 | 1.01034E+2 | 3.76923E+1 | 3.21281E+1
F20 | Median | 8.88269E+2 | 9.80231E+2 | 9.30002E+2 | 1.34776E+3 | 1.00582E+3 | 1.00974E+3

F21 | Mean | 8.95216E+2 | 9.69132E+2 | 9.98694E+2 | 1.02987E+3 | 1.00845E+3 | 9.71668E+2
F21 | Std | 1.90048E+1 | 2.52797E+1 | 2.50207E+1 | 4.44542E+1 | 2.48415E+1 | 4.96973E+1
F21 | Median | 8.93981E+2 | 9.67748E+2 | 9.87276E+2 | 1.04159E+3 | 1.00237E+3 | 1.00512E+3

F22 | Mean | 8.73037E+2 | 8.73037E+2 | 8.72718E+2 | 1.00006E+3 | 9.71034E+2 | 9.12413E+2
F22 | Std | 1.02568E+2 | 1.02568E+2 | 1.02089E+2 | 1.42273E+2 | 1.31093E+2 | 1.36926E+2
F22 | Median | 8.35297E+2 | 8.35297E+2 | 8.35177E+2 | 1.01747E+3 | 9.95780E+2 | 9.16719E+2


Test function | Performance | LPSO | MCPSO | PS2O | HSPSO | CCPSO-ISM | FDR-PSO | FIPPSO

F17 | Mean | 5.42349E+2 | 5.44900E+2 | 5.35993E+2 | 4.30046E+2 | 5.41884E+2 | 5.79451E+2 | 4.75136E+2
F17 | Std | 9.25305E+1 | 8.28155E+1 | 8.35173E+1 | 7.46879E+1 | 4.88435E+1 | 5.84707E+1 | 8.95450E+1
F17 | Median | 5.92930E+2 | 5.74521E+2 | 5.75955E+2 | 46.83026E+2 | 5.17943E+2 | 5.96367E+2 | 4.20985E+2

F18 | Mean | 2.02823E+2 | 1.81841E+2 | 4.17035E+2 | 11.46691E+2 | 1.74437E+2 | 4.12716E+2 | 2.23382E+2
F18 | Std | 1.65636E+2 | 7.39306E+1 | 9.12702E+1 | 2.52999E+1 | 2.51741E+1 | 1.38824E+1 | 1.62433E+1
F18 | Median | 3.49331E+2 | 1.25868E+2 | 4.89381E+2 | 1.76458E+2 | 1.18682E+2 | 4.88089E+2 | 1.14105E+2

F19 | Mean | 2.36447E+2 | 2.36140E+2 | 3.95316E+2 | 1.30640E+2 | 1.34501E+2 | 3.89656E+2 | 2.76803E+2
F19 | Std | 4.17344E+1 | 5.93178E+1 | 1.70563E+2 | 7.27530E+1 | 2.55929E+1 | 2.22961E+2 | 7.94407E+1
F19 | Median | 2.28387E+2 | 1.98377E+2 | 3.97688E+2 | 2.52125E+2 | 1.60227E+2 | 4.06332E+2 | 2.14892E+2

F20 | Mean | 9.57081E+2 | 9.43144E+2 | 1.34425E+3 | 9.54769E+2 | 9.49275E+2 | 9.06064E+2 | 8.96165E+2
F20 | Std | 1.0362E+1 | 5.34765E+1 | 9.39555E+1 | 3.96348E+1 | 6.17441E+1 | 1.40514E+1 | 2.64296E+1
F20 | Median | 9.44736E+2 | 9.18328E+2 | 9.69140E+2 | 9.96919E+2 | 9.87829E+2 | 9.44939E+2 | 8.82621E+2

F21 | Mean | 9.48267E+2 | 9.60789E+2 | 1.08716E+3 | 9.04834E+2 | 9.66728E+2 | 9.67694E+2 | 8.29230E+2
F21 | Std | 1.12561E+1 | 6.62315E+1 | 4.18593E+1 | 1.04182E+1 | 2.56491E+1 | 1.02051E+1 | 1.03564E+1
F21 | Median | 9.02383E+2 | 9.51875E+2 | 1.12016E+3 | 9.25798E+2 | 9.89399E+2 | 9.47878E+2 | 8.90258E+2

F22 | Mean | 9.30376E+2 | 9.48384E+2 | 1.11354E+3 | 9.66485E+2 | 9.82281E+2 | 9.90088E+2 | 8.03380E+2
F22 | Std | 1.90752E+1 | 6.28362E+1 | 7.43989E+1 | 1.21847E+1 | 2.82631E+1 | 1.41154E+1 | 1.19903E+1
F22 | Median | 9.86126E+2 | 9.49257E+2 | 1.49093E+3 | 9.42666E+2 | 9.22614E+2 | 9.80798E+2 | 8.73846E+2

Comparing the results in Tables 6 and 7, the performance of E-S-HPSO is again slightly stronger than that of HPSO, except on F21. The results of the remaining three grouping schemes are not as good as those of E-S-HPSO and HPSO. The overall performance of E-E-HPSO is stronger than that of S-E-HPSO, because the E-E-HPSO search range is wider in the initial stage and the possibility of falling into a local optimum is lower. S-S-HPSO is stronger than S-E-HPSO, and the performance of S-E-HPSO is one order of magnitude lower than that of E-S-HPSO.

Comparing Tables 6 and 7, it can be seen that the E-S-HPSO algorithm is superior to the other improved algorithms.

It can be seen that different combinations of grouping method and search strategy have markedly different effects on algorithm performance.

From the test results on the 22 standard functions above, it can be quantitatively concluded that E-S-HPSO outperforms HPSO. Furthermore, comparing the result statistics of E-S-HPSO with those of S-S-HPSO and E-E-HPSO shows that the information entropy method works better in the extensive search phase than the system clustering method and, likewise, that the system clustering method is more suitable for the accurate search stage.

The advantage of the E-S-HPSO improvement lies not in the optimization result but in the optimization rate. Figure 6 shows the convergence characteristics of the different algorithms on different functions: the horizontal axis represents the number of function evaluations, and the vertical axis represents the logarithm of the gap between the obtained fitness value and the theoretically optimal solution, as in [1]. The simulation results comparing the performance of E-S-HPSO and the other algorithms are shown in Figure 6.

Figure 6 suggests that E-S-HPSO has a faster convergence speed. In the initial stage, E-S-HPSO enables extensive search to have a larger range, while in the accurate search stage, the search is more accurate, and the algorithm is more efficient, thus improving the performance of the algorithm.

To further compare the performance of the improved algorithm, this paper applies a nonparametric statistical test, since the optimization results do not necessarily follow a normal distribution and their variances may differ. The 30 optimization results of the E-S-HPSO algorithm on the above six functions are therefore compared in turn with those of the eight other algorithms (HPSO, etc.) using the Wilcoxon test. With the significance level set to α = 0.05, the resulting significance values p are shown in Table 8.


Test function | HPSO | LPSO | MCPSO | PS2O | HSPSO | CCPSO-ISM | FDR-PSO | FIPPSO

F1  | 1      | 0.3485 | 0.656  | 0.04218 | 0.1419 | 0.4976 | 1      | 0.28003
F6  | 0.8147 | 0.6874 | 0.0147 | 0.0097  | 0.7058 | 0.0378 | 0.7134 | 0.6597
F13 | 0.1576 | 0.0976 | 0.0957 | 0.0854  | 0.0890 | 0.0792 | 0.0959 | 0.0682
F16 | 0.6557 | 0.1659 | 0.8491 | 0.6768  | 0.1357 | 0.3922 | 0.6555 | 0.7722
F18 | 0.7060 | 0.0318 | 0.0276 | 0.0462  | 0.0371 | 0.0235 | 0.0486 | 0.0317
F19 | 0.6387 | 0.0381 | 0.0276 | 0.0295  | 0.0186 | 0.0314 | 0.0244 | 0.0216

From the above comparison, it can be observed that for relatively basic or simple functions such as F1, F6, F13, and F16, the improved algorithm brings no obvious improvement. But for complex, hard-to-optimize functions such as F18 and F19, the effect is significantly improved: against nearly all comparison algorithms, the p values are below 0.05, which is taken as a significant improvement.
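As a concrete illustration of the test procedure above, the Wilcoxon rank-sum comparison of two algorithms' 30-run results can be performed with `scipy.stats.ranksums`; the sample data below are synthetic stand-ins, not the paper's results.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# Synthetic stand-ins for the 30 independent optimization results per algorithm.
es_hpso_results = rng.normal(loc=0.95, scale=0.05, size=30)
hpso_results = rng.normal(loc=1.00, scale=0.05, size=30)

stat, p = ranksums(es_hpso_results, hpso_results)  # two-sided rank-sum test
alpha = 0.05
print(f"p = {p:.4f}; significant difference: {p < alpha}")
```

The rank-sum test is appropriate here precisely because it makes no normality or equal-variance assumption about the two result samples.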

Comparing Tables 2 to 7, it can be seen that E-S-HPSO improves the optimization precision only marginally over HPSO. Since the algorithm in this paper mainly targets speed, the next section focuses on the speed of E-S-HPSO.

7.2. Comparison of Performance between AHPSO and HPSO

Through the theoretical analysis and simulation verification above, E-S-HPSO performs better than the other four algorithms, including HPSO, which verifies the correctness and advantage of AHPSO's new grouping strategy. To compare algorithm performance, the 22 standard test functions with D = 30 are optimized. First, the AHPSO algorithm optimizes each test function 30 times, recording the optimization result and the number of iterations at termination; the average termination iteration count over the 30 runs, denoted ON, is then calculated. The HPSO algorithm is tested in two groups. In the first group, the maximum number of iterations of HPSO is set to 10^6, but the run is stopped once ON iterations have been carried out, and the optimization result is recorded as Result 1. In the second group, the number of iterations of HPSO is set directly to the average ON, and the optimization result is recorded as Result 2. The results are shown in Tables 9–11.


Test function | Performance | Best | HPSO | AHPSO | ON | Result 1 | Result 2

F1 | Mean   | 0E000 | 0E000 | 0E000 | 5.7151E+05 | 0E000 | 0E000
F1 | Std    | 0E000 | 0E000 | 0E000 | 4.8890E+04 | 0E000 | 0E000
F1 | Median | 0E000 | 0E000 | 0E000 | 6.5965E+05 | 0E000 | 0E000

F2 | Mean   | 0E000 | 0E000 | 0E000 | 7.8449E+05 | 0E000 | 0E000
F2 | Std    | 0E000 | 0E000 | 0E000 | 7.0347E+04 | 0E000 | 0E000
F2 | Median | 0E000 | 0E000 | 0E000 | 6.9769E+05 | 0E000 | 0E000

F3 | Mean   | 9.6356E-09 | 9.6356E-09 | 9.1614E-09 | 7.3266E+05 | 1.0616E+03 | 1.2739E+03
F3 | Std    | 4.5948E-08 | 4.5948E-08 | 5.1488E-08 | 7.2689E+03 | 2.3505E+03 | 2.8205E+03
F3 | Median | 1.3141E-16 | 1.31414E-16 | 4.7461E-16 | 7.9314E+05 | 6.1560E+02 | 7.3872E+02

F4 | Mean   | 4.0785E+06 | 4.0785E+06 | 4.0784E+06 | 8.6078E+05 | 4.0793E+06 | 4.0794E+06
F4 | Std    | 4.4282E+06 | 4.4282E+06 | 4.4281E+06 | 3.0344E+03 | 4.4284E+06 | 4.4284E+06
F4 | Median | 5.2877E+06 | 5.2877E+06 | 5.2874E+06 | 8.7140E+05 | 5.2885E+06 | 5.2887E+06

F5 | Mean   | 6.6478E+03 | 6.6478E+03 | 6.5171E+03 | 7.7902E+05 | 7.4126E+03 | 7.5656E+03
F5 | Std    | 1.3453E+03 | 1.3453E+03 | 8.7452E+02 | 2.9387E+03 | 2.7475E+03 | 3.0280E+03
F5 | Median | 6.6430E+03 | 6.6430E+03 | 6.5619E+03 | 7.7859E+05 | 8.0654E+03 | 8.3499E+03

F6 | Mean   | 3.6620E+00 | 3.6620E+00 | 3.6524E+00 | 7.4180E+05 | 4.9186E+02 | 5.8949E+02
F6 | Std    | 4.3480E+00 | 4.3480E+00 | 4.6404E+00 | 7.8728E+03 | 1.8172E+02 | 2.1720E+02
F6 | Median | 2.4078E+04 | 2.4078E+04 | 1.1016E-01 | 8.0959E+05 | 1.9629E+02 | 2.3550E+02

F7 | Mean   | 6.2172E-15 | 2.1320E-07 | 0E000 | 8.0636E+05 | 1.4193E+03 | 1.7032E+03
F7 | Std    | 0E000 | 3.8009E-07 | 0E000 | 8.8840E+03 | 2.9158E+02 | 3.4990E+02
F7 | Median | 6.2172E-15 | 6.2172E-15 | 0E000 | 7.7686E+05 | 1.9781E+02 | 2.3737E+02

F8 | Mean   | 0E000 | 7.4000E-04 | 5.8711E-05 | 7.0170E+05 | 1.5877E+03 | 1.9052E+03
F8 | Std    | 0E000 | 2.2570E-03 | 6.1175E-03 | 1.1471E+04 | 8.0447E+02 | 9.6536E+02
F8 | Median | 0E000 | 0E000 | 0E000 | 8.1521E+05 | 6.9662E+02 | 8.3595E+02


Test function | Performance | Best | HPSO | AHPSO | ON | Result 1 | Result 2

F9 | Mean   | 0E000 | 1.39034E+1 | 1.3637E+01 | 7.6956E+05 | 1.4707E+02 | 6.0341E+02
F9 | Std    | 0E000 | 3.83982E0 | 3.4930E+00 | 1.0933E+04 | 1.7723E+02 | 2.3003E+02
F9 | Median | 0E000 | 1.42459E+1 | 1.3464E+01 | 7.7908E+05 | 4.0518E+02 | 3.9887E+02

F10 | Mean   | 9.3786E-1 | 4.15448E+1 | 3.9882E+01 | 7.3786E+05 | 8.7292E+02 | 6.2453E+02
F10 | Std    | 9.7714E-2 | 1.30013E+1 | 1.1395E+01 | 1.1093E+04 | 8.1637E+02 | 2.6481E+02
F10 | Median | 9.87329E-1 | 2.07210E+1 | 2.0600E+01 | 7.1851E+05 | 8.1192E+01 | 3.1116E+02

F11 | Mean   | 2.08050E+1 | 2.08050E+1 | 2.0006E+01 | 6.8554E+05 | 4.2006E+02 | 6.3790E+02
F11 | Std    | 8.38112E-2 | 8.38112E-2 | 9.6994E-01 | 8.6365E+03 | 5.2696E+02 | 2.6536E+02
F11 | Median | 2.07210E+1 | 2.07210E+1 | 1.9887E+01 | 7.0725E+05 | 4.3752E+02 | 8.4510E+02

F12 | Mean   | 0E000 | 4.15771E+1 | 5.0263E+01 | 7.1762E+05 | 6.9844E+02 | 1.0242E+03
F12 | Std    | 0E000 | 6.66764E0 | 5.4117E0 | 7.7359E+02 | 6.3464E+02 | 7.3692E+02
F12 | Median | 0E000 | 4.38653E+1 | 4.3281E+01 | 8.0754E+05 | 3.3585E+02 | 3.8774E+02

F13 | Mean   | 5.02454E+1 | 5.02454E+1 | 4.9382E+01 | 7.6301E+05 | 4.8190E+02 | 6.3431E+02
F13 | Std    | 4.77167E0 | 4.77167E0 | 4.7407E+00 | 1.2141E+04 | 2.0259E+01 | 1.1254E+02
F13 | Median | 4.9748E+1 | 4.9748E+1 | 4.7780E+01 | 7.4481E+05 | 1.0338E+03 | 9.5606E+02

F14 | Mean   | 2.039144E+1 | 2.039144E+1 | 2.0057E+01 | 7.8154E+05 | 1.8756E+02 | 9.0005E+02
F14 | Std    | 2.77772E0 | 2.77772E0 | 2.5653E+00 | 1.1135E+04 | 1.0899E+02 | 8.2054E+02
F14 | Median | 1.99736E+1 | 1.99736E+1 | 1.9229E+01 | 7.4034E+05 | 3.9238E+02 | 2.8070E+02

F15 | Mean   | 6.0258E0 | 6.0258E0 | 7.6296E+00 | 7.6277E+05 | 2.0414E+02 | 6.0038E+02
F15 | Std    | 2.6258E0 | 2.6258E0 | 1.6464E+00 | 6.8493E+01 | 4.9231E+02 | 2.5138E+01
F15 | Median | 6.8994E0 | 6.8994E0 | 7.2204E+00 | 7.1540E+05 | 3.4639E+02 | 4.3216E+02

F16 | Mean   | 1.15262E+1 | 1.15262E+1 | 9.6229E+00 | 7.4880E+05 | 9.6316E+02 | 3.2425E+02
F16 | Std    | 2.05534E-1 | 2.05534E-1 | 1.6351E+00 | 1.5326E+04 | 9.2054E+02 | 1.6169E+02
F16 | Median | 1.15263E+1 | 1.15263E+1 | 1.1421E+01 | 7.7511E+05 | 6.4203E+01 | 1.9029E+02


Test function | Performance | Best | HPSO | AHPSO | ON | Result 1 | Result 2

F17 | Mean   | 4.30154E+2 | 5.21984E+2 | 5.2112E+02 | 7.9160E+05 | 9.6207E+02 | 8.0948E+02
F17 | Std    | 7.89852E+1 | 4.07938E+1 | 3.9018E+01 | 1.4814E+04 | 5.6794E+02 | 1.3191E+02
F17 | Median | 4.41121E+2 | 5.22974E+2 | 5.2219E+02 | 8.2430E+05 | 9.8040E+02 | 1.0992E+03

F18 | Mean   | 9.58097E+1 | 9.58097E+1 | 1.0427E+02 | 7.9529E+05 | 9.7118E+02 | 7.7917E+02
F18 | Std    | 1.87319E+1 | 1.87319E+1 | 1.7938E+01 | 1.5549E+03 | 5.3678E+02 | 5.6533E+02
F18 | Median | 1.03718E+2 | 1.03718E+2 | 1.0210E+02 | 8.1079E+05 | 1.0473E+03 | 5.2945E+02

F19 | Mean   | 8.83331E+1 | 8.83331E+1 | 9.6823E+01 | 8.1977E+05 | 7.2604E+02 | 7.3278E+02
F19 | Std    | 3.57758E+1 | 3.57758E+1 | 3.5021E+01 | 8.1855E+03 | 9.9347E+02 | 6.8339E+02
F19 | Median | 7.30900E+1 | 7.30900E+1 | 8.2658E+01 | 7.5919E+05 | 3.1380E+02 | 7.5211E+02

F20 | Mean   | 8.91346E+2 | 9.80514E+2 | 9.7893E+02 | 7.8212E+05 | 1.6566E+03 | 1.6163E+03
F20 | Std    | 2.34994E+1 | 2.29816E+1 | 2.1083E+01 | 2.9259E+03 | 3.1205E+02 | 9.6816E+02
F20 | Median | 8.88269E+2 | 9.80231E+2 | 9.7958E+02 | 8.4515E+05 | 1.6520E+03 | 1.1892E+03

F21 | Mean   | 8.95216E+2 | 9.69132E+2 | 9.6779E+02 | 7.8840E+05 | 1.6643E+03 | 1.6784E+03
F21 | Std    | 1.90048E+1 | 2.52797E+1 | 2.4402E+01 | 5.4079E+03 | 9.3272E+01 | 2.6151E+02
F21 | Median | 8.93981E+2 | 9.67748E+2 | 9.6608E+02 | 8.1902E+05 | 1.2225E+03 | 1.0871E+03

F22 | Mean   | 8.73037E+2 | 8.73037E+2 | 8.7150E+02 | 8.4696E+05 | 1.0971E+03 | 1.4803E+03
F22 | Std    | 1.02568E+2 | 1.02568E+2 | 1.0223E+02 | 3.0864E+03 | 7.7040E+02 | 5.5271E+02
F22 | Median | 8.35297E+2 | 8.35297E+2 | 8.3357E+02 | 7.7373E+05 | 1.6797E+03 | 1.2940E+03

Comparison over the above 8 functions suggests that this method finds the optimal value faster than a preset budget of 10^6 search iterations. Under the same number of iterations, the HPSO algorithm is less efficient than the AHPSO algorithm.

For most of the functions in the above experiments, the test results are stronger than those of HPSO, and the performance improvement is attributable to the improved grouping strategy. Although the results on F12 and F15 are slightly worse than those of HPSO, they remain at the same order of magnitude, so the results are still acceptable.

From the results, most functions still perform better than under HPSO. Although the performance on F18 and F19 is slightly worse, the difference is small. Moreover, the average number of iterations of AHPSO is 7.8 × 10^5, saving more than 20% of the computation compared with HPSO. This is because the modified grouping strategy improves the algorithm's speed, while the adaptive state switching makes it easier to escape local optima and descend faster. The adaptive termination means the number of iterations no longer needs to be set, widening the algorithm's applicable scope. Compared with Result 1 and Result 2, AHPSO's results are better by roughly two orders of magnitude, showing that the faster descent of this algorithm improves its applicability.
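The claimed saving follows directly from the iteration counts (a preset budget of 10^6 for HPSO versus the average adaptive termination point ON ≈ 7.8 × 10^5 for AHPSO):

```python
hpso_budget = 10**6   # preset iteration count for HPSO
ahpso_on = 7.8e5      # average adaptive termination point (ON) for AHPSO
ratio = ahpso_on / hpso_budget
saving = 1.0 - ratio
print(f"AHPSO uses {ratio:.0%} of HPSO's budget, saving {saving:.0%}")
# -> AHPSO uses 78% of HPSO's budget, saving 22%
```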

To compare the performance of the two algorithms clearly, convergence curves for AHPSO and HPSO on different functions are plotted, as shown in Figure 7.

Figure 7 directly shows that, under the same number of iterations, the AHPSO algorithm descends faster and converges earlier, extending the algorithm's applicable scope. To show AHPSO's behavior clearly and to compare with Figure 8 in [1], the population distribution at various stages of AHPSO on test function F7 is plotted (Figure 8).

Compared with the HPSO algorithm, which runs for its full preset iteration budget, the AHPSO algorithm achieves optimization in only 73,576 iterations. This is because AHPSO's adaptive strategy not only serves as the condition for recognizing state switching but also helps particles escape local optima quickly. The faster the algorithm's fitness function decreases, the higher the optimization efficiency.

The complexity of the two algorithms can be analyzed and compared as follows. For complex optimization functions, the number of grouping and regrouping operations in both algorithms is far smaller than the number of optimization iterations, so its cost is negligible. Particles in both algorithms are updated by equations (1) and (2); that is, the search strategy is the same, so the complexity of the particle state update is identical. In the HPSO algorithm, the number of iterations must be set, and after each iteration a check against the maximum iteration count is performed: one addition and one comparison. The AHPSO algorithm uses a double judgment condition, but if the first condition is not met, the search simply continues, so the cost is dominated by the first judgment. For switching from extensive to precise search, the first criterion is equation (12), which requires two subtractions, one multiplication, and one comparison. For switching back from precise to extensive search, or for algorithm termination, the first criterion is equation (13), which requires one subtraction and one comparison. The computational cost of updating all particles via equations (1) and (2) is far higher than that of the discriminant conditions in either AHPSO or HPSO; although AHPSO's discriminant cost is slightly higher than HPSO's, the extra computation has negligible impact on a single iteration, i.e., the per-iteration cost of the two algorithms is essentially the same. The figures above suggest that AHPSO converges faster than HPSO, using nearly 78% of HPSO's iterations and saving 20% or more of the computation time, which is significant when optimizing many complex functions.
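The discriminant logic described above can be sketched as follows. The exact forms of equations (12) and (13) are not reproduced in this excerpt, so the slope-based tests below are assumed stand-ins, and `window` and `eps` are hypothetical parameters.

```python
def should_switch(f_hist, window, eps):
    """Extensive -> precise switch test (stand-in for equation (12)):
    fire when the recent descent of the best fitness has flattened,
    i.e., the absolute slope over the last `window` iterations is small."""
    if len(f_hist) <= window:
        return False
    return (f_hist[-1 - window] - f_hist[-1]) < eps * window

def should_terminate(f_old, f_new, eps):
    """Precise-search termination test (stand-in for equation (13)):
    a single improvement check, one subtraction and one comparison."""
    return (f_old - f_new) < eps

history = [100.0, 50.0, 20.0, 19.9, 19.89, 19.889]  # best fitness per iteration
assert not should_switch(history[:3], window=2, eps=0.01)  # still descending fast
assert should_switch(history, window=2, eps=0.01)          # descent has flattened
```

Because the per-iteration cost of these checks is a handful of scalar operations, they are negligible next to the particle updates of equations (1) and (2), consistent with the complexity argument above.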

As AHPSO adopts a new grouping strategy, the algorithm's speed is improved, and the adaptive state switching makes it easier to jump out of local optima, further improving convergence speed. The complexity analysis and simulation verification show that algorithm efficiency is significantly improved.

8. Conclusions

In this study, the HPSO algorithm is briefly reviewed, and three aspects open to further study are identified: its grouping strategy, the setting of the number of iterations, and the switching time of the search strategy. An improved AHPSO algorithm is accordingly proposed. The main work is as follows:

(1) The HPSO algorithm adopts an average grouping strategy, which is fast and easy to implement but contributes little to performance. This study therefore draws on the information entropy method and the system clustering method to improve the particle grouping, pairing each grouping mode with a search strategy. The advantages of the new grouping strategy are verified through theoretical analysis and simulation.

(2) In the HPSO algorithm, the first 80% of iterations use the extensive search strategy and the last 20% the precise search strategy. This ratio strongly affects the algorithm, and a fixed ratio cannot suit all problems. Accordingly, this study uses the absolute value of the slope of the particle fitness function to construct a state switching criterion, realizing the transition from extensive to precise search.

(3) The HPSO search strategy can switch only once. If a particle is trapped in a local optimum before switching, or is not yet close to the optimal solution, the efficiency after switching inevitably suffers. In this study, by combining the absolute slope of the fitness function with the state information after particle regrouping, double discriminant conditions are constructed to enable reverse switching or adaptive termination of the search strategy, forming closed-loop feedback. As a result, the performance of the algorithm is improved.

(4) The control concept of AHPSO differs fundamentally from that of HPSO. HPSO must set the number of iterations and apportion the search modes according to that number, whereas AHPSO constructs conditions for switching between states. After AHPSO starts, it can switch to the exact search, then either switch back or terminate adaptively. Efficiency is improved through the closed loop, and neither the number of iterations nor the ratio between search modes needs to be preset. Thus, both the applicability and the efficiency of the algorithm are improved.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant nos. 61502522, 61502523, 61871397, and 61501505), Equipment Pre-Research Field Fund (Grant no. JZX7Y20190253036101), Equipment Pre-Research Ministry of Education Joint Fund (Grant no. 6141A02033703), and Hubei Provincial Natural Science Foundation (Grant no. 2019CFC897).

References

1. M. Roshanzamir, M. A. Balafar, and S. N. Razavi, “Empowering particle swarm optimization algorithm using multi agents’ capability: a holonic approach,” Knowledge-Based Systems, vol. 136, pp. 58–74, 2017.
2. R. Cheng, M. Li, K. Li, and X. Yao, “Evolutionary multiobjective optimization based multimodal optimization: fitness landscape approximation and peak detection,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 5, pp. 692–706, 2018.
3. S. Saremi, S. Mirjalili, A. Lewis, A. W. C. Liew, and J. S. Dong, “Enhanced multi-objective particle swarm optimisation for estimating hand postures,” Knowledge-Based Systems, vol. 158, pp. 175–195, 2018.
4. A. Ponsich, A. L. Jaimes, and C. A. C. Coello, “A survey on multiobjective evolutionary algorithms for the solution of the portfolio optimization problem and other finance and economics applications,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 3, pp. 321–344, 2013.
5. J. Ding, C. Yang, Y. Jin, and T. Chai, “Generalized multi-tasking for evolutionary optimization of expensive problems,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 44–58, 2017.
6. H. Wu, C. Nie, F.-C. Kuo, H. Leung, and C. J. Colbourn, “A discrete particle swarm optimization for covering array generation,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 4, pp. 575–591, 2015.
7. W. Hu and G. G. Yen, “Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.
8. G. Bramerdorfer, J. A. Tapia, J. J. Pyrhonen, and A. Cavagnino, “Modern electrical machine design optimization: techniques, trends, and best practices,” IEEE Transactions on Industrial Electronics, vol. 65, no. 10, pp. 7672–7684, 2018.
9. J. H. Lee, J.-Y. Song, D.-W. Kim, J.-W. Kim, Y.-J. Kim, and S.-Y. Jung, “Particle swarm optimization algorithm with intelligent particle number control for optimal design of electric machines,” IEEE Transactions on Industrial Electronics, vol. 65, no. 2, pp. 1791–1798, 2018.
10. M. Collotta, G. Pau, and V. Maniscalco, “A fuzzy logic approach by using particle swarm optimization for effective energy management in IWSNs,” IEEE Transactions on Industrial Electronics, vol. 64, no. 12, pp. 9496–9506, 2017.
11. Z. Xue, H. Li, Y. Zhou, N. Ren, and W. Wen, “Analytical prediction and optimization of cogging torque in surface-mounted permanent magnet machines with modified particle swarm optimization,” IEEE Transactions on Industrial Electronics, vol. 64, no. 12, pp. 9795–9805, 2017.
12. G. Lin and J. Guan, “Solving maximum set k-covering problem by an adaptive binary particle swarm optimization method,” Knowledge-Based Systems, vol. 142, pp. 95–107, 2018.
13. G. Lin and J. Guan, “A hybrid binary particle swarm optimization for the obnoxious p-median problem,” Information Sciences, vol. 425, pp. 1–17, 2018.
14. J. Liu, Y. Mei, and X. Li, “An analysis of the inertia weight parameter for binary particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 666–681, 2016.
15. M. Hu, T. Wu, and J. D. Weir, “An adaptive particle swarm optimization with multiple adaptive methods,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 5, pp. 702–720, 2013.
16. T. Bai, Y.-b. Kan, J.-x. Chang, Q. Huang, and F.-J. Chang, “Fusing feasible search space into PSO for multi-objective cascade reservoir optimization,” Applied Soft Computing, vol. 51, pp. 328–340, 2017.
17. T. X. Pham, P. Siarry, and H. Oulhadj, “Integrating fuzzy entropy clustering with an improved PSO for MRI brain image segmentation,” Applied Soft Computing, vol. 65, pp. 230–242, 2018.
18. Y. Yan, R. Zhang, J. Wang, and J. Li, “Modified PSO algorithms with “request and reset” for leak source localization using multiple robots,” Neurocomputing, vol. 292, pp. 82–90, 2018.
19. A. R. Jordehi, “Time varying acceleration coefficients particle swarm optimisation (TVACPSO): a new optimisation algorithm for estimating parameters of PV cells and modules,” Energy Conversion and Management, vol. 129, pp. 262–274, 2016.
20. K. Chen, F. Zhou, and A. Liu, “Chaotic dynamic weight particle swarm optimization for numerical function optimization,” Knowledge-Based Systems, vol. 139, pp. 23–40, 2018.
21. N. Ghorbani, A. Kasaeian, A. Toopshekan, L. Bahrami, and A. Maghami, “Optimizing a hybrid wind-PV-battery system using GA-PSO and MOPSO for reducing cost and increasing reliability,” Energy, vol. 154, pp. 581–591, 2018.
22. S. Goudarzi, W. H. Hassan, M. H. Anisi et al., “ABC-PSO for vertical handover in heterogeneous wireless networks,” Neurocomputing, vol. 256, pp. 63–81, 2017.
23. M. S. Ran and Z. Mesut, “A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems,” Applied Soft Computing, vol. 13, no. 4, pp. 2188–2203, 2013.
24. Y. Gupta and A. Saini, “A novel fuzzy-PSO term weighting automatic query expansion approach using combined semantic filtering,” Knowledge-Based Systems, vol. 136, pp. 97–120, 2017.
25. Z. Liu, C. Mao, J. Luo, Y. Zhang, and C. L. Philip Chen, “A three-domain fuzzy wavelet network filter using fuzzy PSO for robotic assisted minimally invasive surgery,” Knowledge-Based Systems, vol. 66, pp. 13–27, 2014.
26. F. Jiang, H. Xia, Q. Anh Tran, Q. Minh Ha, N. Quang Tran, and J. Hu, “A new binary hybrid particle swarm optimization with wavelet mutation,” Knowledge-Based Systems, vol. 130, pp. 90–101, 2017.
27. K. Chen, F. Zhou, L. Yin, S. Wang, Y. Wang, and F. Wan, “A hybrid particle swarm optimizer with sine cosine acceleration coefficients,” Information Sciences, vol. 422, pp. 218–241, 2018.
28. M. Issa, A. E. Hassanien, D. Oliva, A. Helmi, I. Ziedan, and A. Alzohairy, “ASCA-PSO: adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment,” Expert Systems with Applications, vol. 99, pp. 56–70, 2018.
29. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011.
30. Y.-W. Leung and Y. Wang, “An orthogonal genetic algorithm with quantization for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 41–53, 2001.
31. W. F. Gao, S.-Y. Liu, and L. L. Huang, “A novel artificial bee colony algorithm based on modified search equation and orthogonal learning,” IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013.
32. L. T. Al-Bahrani and J. C. Patra, “A novel orthogonal PSO algorithm based on orthogonal diagonalization,” Swarm and Evolutionary Computation, vol. 40, pp. 1–23, 2018.
33. P. Liu and J. Liu, “Multi-leader PSO (MLPSO): a new PSO variant for solving global optimization problems,” Applied Soft Computing, vol. 61, pp. 256–263, 2017.
34. T. Y. Tan, L. Zhang, S. C. Neoh, and C. P. Lim, “Intelligent skin cancer detection using enhanced particle swarm optimization,” Knowledge-Based Systems, vol. 158, pp. 118–135, 2018.
35. P. N. Suganthan, “Particle swarm optimiser with neighbourhood operator,” in Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), vol. 3, pp. 1958–1962, IEEE, Washington, DC, USA, July 1999.
36. H. Chen and Y. Zhu, “Optimization based on symbiotic multi-species coevolution,” Applied Mathematics and Computation, vol. 205, no. 1, pp. 47–60, 2008.
37. B. Niu, Y. Zhu, and X. He, “Multi-population cooperative particle swarm optimization,” in Proceedings of the European Conference on Artificial Life, pp. 874–883, Springer, Canterbury, UK, September 2005.
38. Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, “Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, pp. 370–382, 2015.
39. R.-L. Tang and Y.-J. Fang, “Modification of particle swarm optimization with human simulated property,” Neurocomputing, vol. 153, pp. 319–331, 2015.
40. R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
41. T. Peram, K. Veeramachaneni, and C. K. Mohan, “Fitness-distance-ratio based particle swarm optimization,” in Proceedings of the 2003 IEEE Swarm Intelligence Symposium SIS’03 (Cat. No. 03EX706), pp. 174–181, IEEE, Indianapolis, IN, USA, April 2003.

Copyright © 2019 Hao Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
