Abstract

Particle swarm optimization (PSO) and the fireworks algorithm (FWA) are two swarm intelligence methods that have been applied in various areas owing to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, PSO may become trapped in local optima because it lacks a powerful global exploration capability, while FWA can be slow to converge in some cases because of the relatively low local exploitation efficiency of its noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which modified operators of FWA are embedded into the solving process of PSO. During the iterations, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation abilities of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up global convergence and to avoid premature convergence. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions are employed, and PS-FW is compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and AIWPSO algorithms. The results show that PS-FW is an efficient, robust, and fast-converging optimization method for solving global optimization problems.

1. Introduction

Global optimization problems are common in engineering and other related fields [1–3], and they are usually difficult to solve because of numerous local optima and complex search spaces, especially in high dimensions. Many methods for solving optimization problems have been reported in the past few years. Recently, stochastic optimization algorithms have attracted increasing attention because they can obtain good solutions without requiring any analytical properties of the objective function. As a result, many effective metaheuristic algorithms have been presented, such as simulated annealing (SA) [4], differential evolution (DE) [5], the genetic algorithm (GA) [6], particle swarm optimization (PSO) [7], ant colony optimization (ACO) [8], the artificial bee colony (ABC) algorithm [9], and the fireworks algorithm (FWA) [10].

Among these intelligent algorithms, PSO and FWA have shown outstanding performance in solving global optimization problems over the last several years. PSO is a population-based algorithm originally proposed by Kennedy and Eberhart [7], inspired by the social foraging behavior of bird flocks. The fireworks algorithm is a newer swarm intelligence algorithm motivated by the observation of fireworks explosions. Owing to their few decision parameters, simple implementation, and good scalability, PSO and FWA have been widely applied since they were proposed, including to shunting schedule optimization of electric multiple unit depots [11], optimal operation of trunk natural gas pipelines [12], location optimization of logistics distribution centers [13], artificial neural network design [14], warehouse scheduling [15], fertilization optimization [16], power system reconfiguration [17], and multimodal function optimization [18].

Although PSO and FWA are highly successful in solving some classes of global optimization problems, certain issues need to be addressed when they are extended to complex high-dimensional optimization problems. The PSO algorithm is highly efficient on unimodal problems, but it is easily trapped in local optima on multimodal problems. Moreover, FWA converges slowly on optimization problems whose optimal solutions are not at the origin. The underlying reason is that neither algorithm properly balances exploration and exploitation. Because the optimal particle dominates the solving process, PSO suffers from poor swarm diversity in the later iterations and relatively weak exploration ability [19], while the fireworks and sparks in FWA are not well informed by the whole swarm [20] and the FWA framework lacks local search efficiency for the noncore fireworks [21]. To improve the performance of PSO and FWA, a considerable number of modified algorithms have been proposed. For example, Nickabadi et al. presented the AIWPSO algorithm, which adopts a new adaptive inertia weight approach [22]. By embedding a reverse predictor and adding a repulsive force into the basic algorithm, the RPPSO was developed [23]. Wang and Liu used three strategies to improve the standard algorithm, including best neighbor replacement, an abandonment mechanism, and chaotic searching [24]. Souravlias and Parsopoulos introduced a PSO-based variant that dynamically assigns a different computational budget to each particle based on the quality of its neighborhood [25]. Based on a self-adaptation principle and a bimodal Gaussian function, the advanced fireworks algorithm (AFWA) was proposed [26]. Liu et al. presented several methods for computing the explosion amplitude and the number of sparks [27]. Pei et al. proposed using the elite point of an approximated landscape in the fireworks swarm and discussed the effectiveness of surrogate-assisted FWA [28]. Zheng et al. introduced a new explosion operator, mutation operator, selection strategy, and mapping rule for FWA, which led to the enhanced fireworks algorithm (EFWA) [29, 30] and the dynamic search fireworks algorithm (dynFWA) [31]. Zheng et al. also proposed the cooperative FWA framework (CoFFWA), which contains an independent selection method and a crowdedness-avoiding cooperative strategy [21]. Li et al. investigated the operators of FWA, introduced a novel guiding spark [32], and proposed the adaptive fireworks algorithm (AFWA) [33] and the bare bones fireworks algorithm (BBFWA) [34].

Hybrid algorithms can combine various exploration and exploitation strategies for high-dimensional multimodal optimization problems, and they have gradually become a new research area. For example, Valdez et al. combined the advantages of PSO and GA and proposed a modified hybrid method [35]. In the PS-ABC algorithm introduced by Li et al., the global optimum is sought by combining the local search phase of PSO with two global search phases of ABC [19]. Pandit et al. presented SPSO-DE, in which PSO and DE share search information with one another to overcome their respective weaknesses [36]. By changing the generation and selection strategy of explosion sparks, Gao and Diao proposed the CA-FWA [37]. Zhang et al. proposed the BBO-FW algorithm, which improves the interaction between fireworks [38]. By combining FWA with the operators of DE, a further hybrid optimization algorithm was proposed [20].

In this paper, by utilizing the exploitation ability of PSO and the exploration ability of FWA, a novel hybrid optimization algorithm called PS-FW is proposed. Based on the solving process of the PSO algorithm, the operators of FWA are embedded into the update operation of the particle swarm. To promote the balance between the exploitation and exploration abilities of PS-FW during the iterations, we present three major techniques. First, an abandonment and supplement strategy is used to discard a certain number of low-quality particles and to supplement the swarm with new individuals generated by FWA. Second, considering the information exchange between the optimal firework and its neighbor in each dimension, the method for obtaining the explosion amplitude is made adaptive, and the generation of explosion sparks is modified by combining it with a greedy selection. Third, the conventional Gaussian mutation operator is abandoned, and a novel mutation operator based on the idea of social cognition and learning is proposed. The performance of PS-FW is compared with several existing optimization algorithms, and the experimental results show that the proposed PS-FW is more effective in solving global optimization problems.

The rest of the paper is organized as follows. Section 2 describes the standard PSO and FWA. Section 3 presents the PS-FW algorithm and its details. Section 4 reports the simulation results on 22 high-dimensional benchmark functions and the corresponding comparisons between PS-FW and other algorithms. Finally, the conclusion is drawn in Section 5.

2. The Standard PSO and FWA

2.1. PSO Algorithm

In the PSO algorithm, the particles scatter across the search space of the optimization problem and each particle denotes a feasible solution. Each particle carries three pieces of information: its current position $X_i$, its velocity $V_i$, and its previous best position $P_i$. Assume that the optimization problem is $D$-dimensional and $N$ is the size of the swarm; then the position and velocity of the $i$th ($i = 1, 2, \ldots, N$) particle can be denoted as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$, respectively, while the previous best position is represented as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$. Besides, the best position encountered by the entire swarm so far is known as the current global best position $P_g$. In each generation, $V_i$ and $X_i$ are updated by the following equations:

$$V_i^{t+1} = \omega V_i^{t} + c_1 r_1 \bigl(P_i - X_i^{t}\bigr) + c_2 r_2 \bigl(P_g - X_i^{t}\bigr), \quad (1)$$

$$X_i^{t+1} = X_i^{t} + V_i^{t+1}, \quad (2)$$

where $c_1$ and $c_2$ are two learning factors that indicate the influence of the cognitive and social components, $r_1$ and $r_2$ are random real numbers in the interval $[0, 1]$, and $\omega$ is the inertia weight, which controls the convergence speed of the algorithm.
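For illustration, a minimal sketch of the update in (1)-(2) is given below; the default parameter values are placeholders rather than the settings used in this paper.

```python
import numpy as np

def pso_update(X, V, P, Pg, w=0.7, c1=2.0, c2=2.0, rng=np.random.default_rng()):
    """One PSO generation: update velocities and positions of the whole swarm.

    X, V, P : arrays of shape (N, D) -- current positions, velocities, personal bests
    Pg      : array of shape (D,)    -- current global best position
    """
    N, D = X.shape
    r1 = rng.random((N, D))          # cognitive random factors in [0, 1]
    r2 = rng.random((N, D))          # social random factors in [0, 1]
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)   # Eq. (1)
    X = X + V                                            # Eq. (2)
    return X, V
```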

2.2. Fireworks Algorithm

In FWA, a firework or a spark denotes a potential solution of the optimization problem, while the process of producing sparks from fireworks represents a search in the feasible space. As in other optimization algorithms, the optimal solutions are obtained by successive iterations. In each iteration, sparks are produced in two ways: by explosion and by Gaussian mutation. The explosion of a firework is governed by its explosion amplitude and its number of explosion sparks. Compared with fireworks of lower fitness, fireworks with better fitness have smaller explosion amplitudes and generate more explosion sparks. Suppose that $N_f$ denotes the number of fireworks; then the $i$th ($i = 1, 2, \ldots, N_f$) firework can be denoted as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ for a $D$-dimensional optimization problem. The explosion amplitude $A_i$ is obtained by (3) and the number of sparks $S_i$ is calculated by (4):

$$A_i = \hat{A} \cdot \frac{f(X_i) - y_{\min} + \varepsilon}{\sum_{j=1}^{N_f} \bigl(f(X_j) - y_{\min}\bigr) + \varepsilon}, \quad (3)$$

$$S_i = M_e \cdot \frac{y_{\max} - f(X_i) + \varepsilon}{\sum_{j=1}^{N_f} \bigl(y_{\max} - f(X_j)\bigr) + \varepsilon}, \quad (4)$$

where $f(X_i)$ denotes the objective function value of the $i$th firework, $y_{\min}$ and $y_{\max}$ are the best and worst objective values among the current fireworks, $A_i$ and $S_i$ are the explosion amplitude and the number of explosion sparks of the $i$th firework, respectively, $\hat{A}$ and $M_e$ are two constants that control the total explosion amplitude and the total number of explosion sparks, respectively, and $\varepsilon$ is the machine epsilon.

Moreover, the number of explosion sparks $S_i$ is bounded as follows:

$$\hat{S}_i = \begin{cases} \operatorname{round}(a \cdot M_e), & S_i < a \cdot M_e, \\ \operatorname{round}(b \cdot M_e), & S_i > b \cdot M_e, \\ \operatorname{round}(S_i), & \text{otherwise}, \end{cases} \quad (5)$$

where $a$ and $b$ ($a < b < 1$) are two constants that control the minimum and maximum number of explosion sparks per firework, respectively.
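The following sketch shows one way to compute (3)-(5) for a minimization problem; the constant names `A_hat`, `M_e`, `a`, and `b` and their default values are illustrative assumptions, not the settings of this paper.

```python
import numpy as np

def explosion_parameters(fitness, A_hat=40.0, M_e=50, a=0.04, b=0.8):
    """Explosion amplitude (Eq. 3), spark count (Eq. 4), and its bounds (Eq. 5).

    fitness : array of shape (Nf,) -- objective values of the fireworks (minimization)
    """
    eps = np.finfo(float).eps
    y_min, y_max = fitness.min(), fitness.max()
    A = A_hat * (fitness - y_min + eps) / (np.sum(fitness - y_min) + eps)   # Eq. (3)
    S = M_e * (y_max - fitness + eps) / (np.sum(y_max - fitness) + eps)     # Eq. (4)
    S = np.clip(S, a * M_e, b * M_e)                                        # Eq. (5)
    return A, np.round(S).astype(int)
```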

In order to generate an explosion spark of the $i$th firework, an offset is added to $X_i$ on a randomly chosen subset of dimensions:

$$\hat{X}_{ik} = X_i + A_i \cdot \mathrm{rand}(-1, 1) \cdot B, \quad (6)$$

where $\hat{X}_{ik}$ is the $k$th explosion spark of the $i$th firework, $B$ is a $D$-dimensional vector with $z$ entries equal to 1 and $D - z$ entries equal to 0, $z$ denotes the number of randomly selected dimensions, and $\mathrm{rand}(0, 1)$ and $\mathrm{rand}(-1, 1)$ are random numbers drawn uniformly from the intervals $[0, 1]$ and $[-1, 1]$, respectively.
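A sketch of this spark generation follows; note that a single offset is shared by all selected dimensions, which is the behavior that the modified operator in Section 3.3.2 later improves on. Bound handling is omitted here (see the mapping rule in Section 3.5).

```python
import numpy as np

def explosion_spark(Xi, Ai, rng=np.random.default_rng()):
    """Generate one explosion spark of firework Xi (standard FWA, Eq. 6).

    Xi : array of shape (D,) -- position of the firework
    Ai : float               -- explosion amplitude of the firework
    """
    D = Xi.size
    z = rng.integers(1, D + 1)                   # number of randomly selected dimensions
    dims = rng.choice(D, size=z, replace=False)
    h = Ai * rng.uniform(-1.0, 1.0)              # one offset shared by all selected dimensions
    spark = Xi.copy()
    spark[dims] += h
    return spark
```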

Another type of spark, known as a Gaussian spark, is generated by the Gaussian mutation operator. In each generation, a certain number of Gaussian sparks are produced, and each Gaussian spark is transformed from a randomly selected firework. For a selected firework $X_i$, a number $z$ of its dimensions are chosen at random, and on these dimensions the coordinates of $X_i$ are multiplied by a random number $g$ drawn from the Gaussian distribution with mean 1 and standard deviation 1, while the remaining dimensions are left unchanged; the result is the $k$th Gaussian spark $\tilde{X}_{k}$.
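A minimal sketch of the Gaussian spark generation:

```python
import numpy as np

def gaussian_spark(Xi, rng=np.random.default_rng()):
    """Generate one Gaussian spark from firework Xi (standard FWA, Eq. 7)."""
    D = Xi.size
    z = rng.integers(1, D + 1)                   # number of dimensions to mutate
    dims = rng.choice(D, size=z, replace=False)
    g = rng.normal(loc=1.0, scale=1.0)           # Gaussian coefficient, g ~ N(1, 1)
    spark = Xi.copy()
    spark[dims] *= g                             # scale the selected coordinates by g
    return spark
```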

For the purpose of passing information to the next generation, a new fireworks population is chosen to continue the iteration. All the fireworks, the explosion sparks, and the Gaussian sparks have the chance to be selected. The location with the best fitness is kept for the next generation, while the remaining locations are selected with a probability proportional to their distance from the rest of the population:

$$R(X_i) = \sum_{j \in K} d(X_i, X_j) = \sum_{j \in K} \lVert X_i - X_j \rVert, \qquad p(X_i) = \frac{R(X_i)}{\sum_{j \in K} R(X_j)}, \quad (8)$$

where $K$ denotes the candidate set comprising all the original fireworks and both types of sparks, $X_i$ and $X_j$ are the $i$th and $j$th locations in $K$, $R(X_i)$ is the sum of distances between the $i$th location and all other locations, and $p(X_i)$ denotes the probability of the $i$th location being selected.
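The distance-based selection can be sketched as follows; the Euclidean norm is assumed as the distance measure.

```python
import numpy as np

def fwa_selection(candidates, fitness, n_select, rng=np.random.default_rng()):
    """Distance-based selection of standard FWA (Eq. 8).

    candidates : array of shape (M, D) -- fireworks plus all sparks
    fitness    : array of shape (M,)   -- their objective values (minimization)
    n_select   : int                   -- size of the next fireworks population
    """
    best = int(np.argmin(fitness))                              # elite: keep the best location
    dist = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=-1)
    R = dist.sum(axis=1)                                        # crowding measure R(X_i)
    p = R / R.sum()                                             # selection probabilities
    others = rng.choice(len(candidates), size=n_select - 1, p=p)
    return candidates[np.concatenate(([best], others))]
```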

3. Hybrid Optimization Algorithm Based on PSO and FWA

The exploitation process focuses on utilizing existing information to look for better solutions, whereas the exploration process emphasizes seeking optimal solutions across the entire space. In PSO, guided by their historical best solutions and the current global best solution, the particles can quickly find better solutions, which reflects the excellent exploitation efficiency of the algorithm. In FWA, the fireworks can find the global optimal solution in the whole search space by performing explosion and mutation operations, which demonstrates the outstanding exploration capability of FWA. To utilize the advantages of the two algorithms, a hybrid optimization method (PS-FW) based on PSO and FWA is proposed.

3.1. Feasibility Analysis

A hybrid algorithm is formed mainly by effectively combining the operators of its component algorithms in a certain way. To clarify the performance enhancement obtained by combining the PSO algorithm with the fireworks algorithm, Figures 1 and 2 illustrate the optimization mechanism. As shown in Figure 1, in the standard PSO algorithm a particle moves from point 1 to point 4 under the joint influence of velocity inertia, self-cognition, and social information. When the operators of FWA are added, the particle is transformed into a firework, performs the explosion and mutation operations, and eventually reaches the position of a firework or spark, such as point 5 in Figure 1. By performing the operators of FWA, the particle can explore better solutions in multiple directions and jump out of the local optimum region, as depicted in Figure 1. Thus the operators of FWA improve the global search ability of the PSO algorithm. As we know, the searching region is determined by the explosion amplitude, and fireworks of poor quality have larger amplitudes, which may lead to an unfocused search if cooperation with other fireworks is not considered. When a firework of poor quality generates its explosion sparks and mutation sparks, the newly selected location may skip over the global optimum region without the attraction of the remaining fireworks and arrive at point 2. By applying the operators of PSO after the firework updates its location, the information of its own historical best location and the current global best location is taken into account, and the new solution is found at point 5, as shown in Figure 2. Therefore, the operators of PSO can strengthen the local search efficiency of FWA. Based on the above analysis, it is concluded that the combination of PSO and FWA is an effective way to form a superior optimization algorithm.

3.2. The Abandonment and Supplement Mechanism

Owing to their memory, the particles converge quickly to the current optimal solution. However, this aggregation effect reduces the diversity of the population, which makes the search of the whole feasible space inefficient. In this paper, in order to enhance the balance between the exploitation and exploration abilities of PS-FW, we adopt an abandonment and supplement strategy with three main steps. (i) All the particles in the swarm are sorted in ascending order of fitness. The particles with better fitness are retained for the next iteration, and the $N_{ab}$ (with $N_{ab} < N$) particles with lower fitness are abandoned. (ii) The retained excellent individuals are used to implement the explosion operator, the mutation operator, and the selection operator. (iii) The new individuals obtained by the operators of FWA are added to the original population, so that the number of particles is restored and the new particle swarm for the next iteration is generated. The abandonment and supplement strategy not only retains the information of the excellent individuals so that they can participate in the subsequent calculation, but also prevents poor-quality individuals from wasting computing resources. A natural question is how to determine $N_{ab}$. In the early stage of the iterations we should enhance the exploration ability of the algorithm and search for the optimal solution in the global scope, which means that the number of particles executing the operators of FWA should form the majority. In the later stage we should focus on searching around the current global optimal solution, so the number of excellent individuals retained in the algorithm should be larger. Based on this discussion, $N_{ab}$ is calculated by (9), in which it decreases as the iterations proceed, where $N_{ab}^{\max}$ and $N_{ab}^{\min}$ are the upper and lower bounds on the number of abandoned particles, respectively, $T_{\max}$ is the maximum number of iterations, $t$ denotes the current iteration number, $\mathrm{round}(\cdot)$ indicates that the value in brackets is rounded, and $k$ represents a positive integer.
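The exact form of (9) is not reproduced here; the sketch below assumes a simple polynomial schedule that decreases from $N_{ab}^{\max}$ towards $N_{ab}^{\min}$, which matches the qualitative behavior described above, together with the abandonment and supplement step itself.

```python
import numpy as np

def num_abandoned(t, T_max, n_ab_min, n_ab_max, k=2):
    """Assumed schedule for the number of abandoned particles N_ab (cf. Eq. 9):
    starts near n_ab_max and decreases towards n_ab_min as iterations proceed."""
    frac = t / T_max
    return int(round(n_ab_max - (n_ab_max - n_ab_min) * frac ** k))

def abandon_and_supplement(positions, fitness, new_individuals, n_ab):
    """Keep the N - N_ab best particles and refill the swarm with FWA-generated
    individuals (at least n_ab supplements are assumed to be available)."""
    order = np.argsort(fitness)                      # ascending: best first (minimization)
    kept = positions[order[:len(positions) - n_ab]]  # particles retained for the next iteration
    return np.vstack([kept, new_individuals[:n_ab]]) # supplement with n_ab new individuals
```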

3.3. Modified Explosion Operator
3.3.1. Adaptive Explosion Amplitude

As the analysis above suggests, the definition of the explosion amplitude in standard FWA limits the diversity of the explosion sparks generated by the excellent fireworks and thus decreases the local search ability of the algorithm. In the enhanced fireworks algorithm (EFWA) [29], to avoid this weakness, a minimal explosion amplitude check mechanism is proposed: any explosion amplitude smaller than a certain threshold is set equal to the threshold, and the threshold itself decreases as the iterations proceed. Denoting the threshold of the explosion amplitude by $A_{\min}$, the adjustment of amplitudes below the threshold is defined in EFWA as in (10), whose two parameters are the upper and lower bounds of the explosion amplitude, respectively.

In this paper, building on the minimal explosion amplitude check mechanism, the basic explosion amplitude of each firework is first calculated according to (3), and the amplitude is then adjusted in one of the following two ways.

For fireworks whose explosion amplitude is greater than the threshold, a control factor is applied to the explosion amplitude. The control factor gives the explosion sparks a larger search scope in the early stage of the iterations, which effectively enhances the exploration ability of the algorithm; in the later stage of the iterations, the explosion amplitude is reduced to improve the search efficiency around the current global optimal solution. The adjustment of the explosion amplitude is shown in (11), and the control factor is calculated as shown in (12), whose two parameters are the lower and upper bounds of the control factor, respectively.

When the explosion amplitude of a firework is less than the threshold, the optimal firework and the information of its neighbor are used to determine the explosion amplitude in the hybrid algorithm. Since PS-FW is based on the framework of PSO, the positions of all individuals approach the current best position, which makes the fitness of the current optimal individual close to that of its neighboring individuals. In other words, a very small explosion amplitude indicates that the firework may be located near the current best location; therefore, a new explosion amplitude is generated by considering the deviations, in all corresponding dimensions, between the current best firework and a neighbor firework. This amplitude generation method adapts itself to the solving process, which can be interpreted from two aspects. In the early iteration stage, the fireworks are scattered and the per-dimension deviations between the optimal firework and its neighbor are large, which leads to a larger explosion amplitude and an improved probability of finding the global optimal solution. As the algorithm enters the later iterations, the fireworks gather around the current best location and the per-dimension offsets between the current best firework and its neighbor shrink, which decreases the explosion amplitude and improves the local search ability of PS-FW. There are two main steps to obtain the explosion amplitude: (i) randomly select a firework around the current optimal firework according to fitness; (ii) update the explosion amplitude of the firework according to (13), which uses the value of each dimension of the current optimal firework.
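A sketch of the two amplitude adjustments follows. The concrete forms of (11)-(13) are not reproduced from the paper; the linearly decreasing control factor and the per-dimension deviation from a neighbor of the best firework are assumptions chosen to match the qualitative description above.

```python
import numpy as np

def adjust_amplitude(A_i, t, T_max, A_threshold, X_best, X_neighbor,
                     cf_min=0.2, cf_max=1.0):
    """Adaptive explosion amplitude of PS-FW (cf. Eqs. 11-13); the exact formulas
    are assumptions consistent with the paper's qualitative description."""
    if A_i > A_threshold:
        # Assumed Eqs. (11)-(12): shrink the amplitude by a control factor that
        # decreases linearly from cf_max to cf_min over the iterations.
        cf = cf_max - (cf_max - cf_min) * t / T_max
        return A_i * cf                                  # scalar amplitude
    # Assumed Eq. (13): per-dimension amplitude from the deviation between the
    # current best firework and a randomly chosen neighbor firework.
    return np.abs(X_best - X_neighbor)                   # vector amplitude (one entry per dimension)
```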

3.3.2. Modified Explosion Sparks Generation

In FWA, when an explosion spark is generated, the offset is calculated only once, which results in the same change for all selected dimensions and an ineffective search in different directions. In the PS-FW algorithm proposed in this paper, a new explosion spark generation method is introduced. First, when generating an explosion spark, the location offset is applied to all dimensions of the firework instead of a randomly selected subset of dimensions. Furthermore, a different offset is calculated for each dimension according to (14), thereby increasing the diversity of the explosion sparks and the global search capability of the hybrid algorithm. Meanwhile, for each dimension three candidates are considered: the firework without a location offset, the firework whose current dimension is increased by the offset, and the firework whose current dimension is decreased by the offset. As shown in (15), inspired by the greedy algorithm, the hybrid algorithm decides which signed offset to accept based on the objective function value when a firework generates its explosion sparks, which effectively improves the local search capability of the algorithm and accelerates convergence. In (14) and (15), the quantities involved are the value and the offset of the $k$th dimension of the $j$th explosion spark of the $i$th firework, a random number following the standard normal distribution, two integers drawn from given intervals, and the minimum of the values in parentheses.
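The modified spark generation can be sketched as follows; the per-dimension offset $A_i \cdot \mathcal{N}(0,1)$ and the pairwise greedy comparison are assumptions consistent with the description above, not the paper's exact formulas (14)-(15).

```python
import numpy as np

def modified_explosion_spark(Xi, Ai, objective, rng=np.random.default_rng()):
    """Modified explosion spark of PS-FW (cf. Eqs. 14-15), written as a sketch:
    every dimension receives its own offset, and the sign of the offset is chosen
    greedily by comparing the objective values of the '+' and '-' candidates."""
    spark = Xi.copy()
    Ai = np.broadcast_to(Ai, Xi.shape)            # Ai may be a scalar or a per-dimension vector
    for k in range(Xi.size):
        offset = Ai[k] * rng.standard_normal()    # a different offset for each dimension (Eq. 14)
        plus, minus = spark.copy(), spark.copy()
        plus[k] += offset
        minus[k] -= offset
        # greedy choice between adding and subtracting the offset (Eq. 15)
        spark = plus if objective(plus) <= objective(minus) else minus
    return spark
```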

Assume that the total number of explosion sparks generated by all fireworks and the lower and upper bounds of the search scope in each dimension are given. Based on the explosion operator introduced in Sections 3.3.1 and 3.3.2, the detailed pseudocode of the explosion operator is presented in Algorithm 1.

(1) Input: particles sorted in ascending order according to their fitness.
(2) Initialize the location of fireworks: , .
(3) for each firework i do
(4)  Calculate the explosion amplitude of the ith firework by using (3).
(5)  Calculate the number of explosion sparks of the ith firework by using (4).
(6)  Update the number of explosion sparks of the ith firework by using (5).
(7)  if the explosion amplitude of the ith firework is greater than the threshold do
(8)   Update the explosion amplitude of the ith firework by using (11) and (12).
(9)  else do
(10)  Randomly select a firework around the current optimal firework.
(11)   Update the explosion amplitude of the ith firework by using (13).
(12)  end if
(13) end for
(14) Initialize the total number of explosion sparks.
(15) for each firework i do
(16)  for each explosion spark j of the ith firework do
(17)   Initialize the location of the jth explosion spark.
(18)    for each dimension k do
(19)     Calculate the offset of the kth dimension by using (14).
(20)    Update the value of the kth dimension of the jth explosion spark by using (15).
(21)     if the kth dimension is below the lower bound or above the upper bound do
(22)      Update the value of the kth dimension by using (17).
(23)     end if
(24)    end for
(25)   
(26)  end for
(27) end for
(28) Output: explosion sparks.
3.4. Novel Mutation Operator

Because the Gaussian mutation operator effectively increases the diversity of feasible solutions, the performance of traditional FWA was significantly improved by it. However, numerical experiments show that the combined application of the Gaussian operator and the mapping operator makes the Gaussian sparks concentrate mostly around the zero point, which explains why FWA converges fast on problems whose optimal solutions are at zero [31]. In order to improve the adaptability of the algorithm to nonzero optimization problems while maintaining the contribution of the mutation operator to population diversity, a new mutation operator is proposed in PS-FW. Compared with standard FWA, there are two main differences. (i) In PS-FW, we randomly select a certain number of explosion sparks to generate the mutation sparks instead of using the fireworks. Because the explosion sparks have better quality than the fireworks, owing to (15), the mutation sparks generated from the explosion sparks can effectively enrich the diversity of the population and provide better global search ability. (ii) The Gaussian random number is no longer used in the mutation operator; instead, the interaction mechanism of the particles in PSO is used as a reference for designing the mutation operator. The mutation sparks generated by our operator not only retain the good information of the explosion sparks but also move appropriately towards the current best location, which promotes the convergence of the hybrid algorithm. The proposed mutation operator is shown in (16), in which each selected dimension of a randomly chosen explosion spark is updated using the current optimal explosion spark, random numbers in $[0, 1]$, and a random integer index, and the total number of mutation sparks determines how many such sparks are produced.
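A sketch of the mutation operator is given below. The exact form of (16) is not reproduced; the update that pulls a randomly chosen explosion spark towards the current best explosion spark on a random subset of dimensions is an assumption that mirrors the social-learning term of PSO described above.

```python
import numpy as np

def mutation_spark(ES, ES_best, rng=np.random.default_rng()):
    """Assumed sketch of the PS-FW mutation operator (cf. Eq. 16): a randomly chosen
    explosion spark is pulled towards the current best explosion spark on a random
    subset of dimensions, echoing the social-learning term of PSO."""
    D = ES.shape[1]
    j = rng.integers(ES.shape[0])                 # randomly selected explosion spark
    spark = ES[j].copy()
    z = rng.integers(1, D + 1)                    # number of dimensions to mutate
    dims = rng.choice(D, size=z, replace=False)
    r = rng.random(z)                             # random coefficients in [0, 1]
    # move the selected coordinates towards the best explosion spark (assumed form)
    spark[dims] += r * (ES_best[dims] - spark[dims])
    return spark
```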

The detailed pseudocode of the mutation operator is presented in Algorithm 2.

(1) Input: explosion sparks and best explosion spark
  
(2) for each mutation spark i do
(3)  Randomly select an explosion spark by generating a random integer index.
(4)  Initialize the location of the ith mutation spark.
(5)  Calculate the number of dimensions to perform the mutation.
(6)  Randomly select that number of dimensions of the spark.
(7)  for each of the pre-selected dimensions do
(8)   Calculate the value of the dimension by using (16).
(9)   if the value is below the lower bound or above the upper bound do
(10)    Update the value of the dimension by using (17).
(11)  end if
(12) end for
(13) end for
(14) Output: mutation sparks.
3.5. Main Process of PS-FW

The PS-FW algorithm consists of two main stages: the initialization stage and the iteration stage. In the initialization stage, the position and velocity of the particle swarm and the control parameters are initialized. In the iteration stage, PS-FW inherits all the parameters and operators of the PSO algorithm, and the particles are used as the main carriers of feasible solutions. First, in each iteration, the particles update their velocities and positions according to the operators of PSO and then perform the abandonment and supplement operation. In the process of generating the supplementary individuals with the operators of FWA, the explosion sparks are first generated from the excellent particles using the modified explosion operator, and their fitness is evaluated. Second, the mutation sparks are generated from the explosion sparks using the novel mutation operator. Finally, the supplementary individuals are selected by a combination of the elite strategy and the roulette strategy. When each iteration is completed, the termination condition is checked: if the stopping criterion is met, the iteration stops and the best solutions are output; otherwise, the iteration stage is repeated.

Two points in the procedure above should be noted. (i) During the execution of the hybrid algorithm, it is necessary to check whether the positions of individuals lie within the feasible scope, where the individuals include particles, fireworks, explosion sparks, and mutation sparks. As shown in (17), if the $k$th dimension of an individual exceeds the feasible scope, it is adjusted by the mapping criterion of the EFWA algorithm [29]:

$$x_k = X_{\min,k} + \mathrm{rand}(0, 1) \cdot \bigl(X_{\max,k} - X_{\min,k}\bigr), \quad (17)$$

where $x_k$ indicates the value of the $k$th dimension of the individual, $X_{\min,k}$ and $X_{\max,k}$ are the lower and upper bounds of the $k$th dimension, and $\mathrm{rand}(0, 1)$ is a random number in $[0, 1]$.
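A minimal sketch of this mapping rule, applied componentwise:

```python
import numpy as np

def map_to_bounds(x, lower, upper, rng=np.random.default_rng()):
    """EFWA-style mapping rule (Eq. 17): out-of-bound components are replaced by
    uniformly random values inside the feasible range of that dimension."""
    out = (x < lower) | (x > upper)
    x = x.copy()
    x[out] = lower[out] + rng.random(out.sum()) * (upper[out] - lower[out])
    return x
```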

(ii) The selection strategy of FWA based on the density of feasible solutions is abandoned in PS-FW. Although selecting locations with fewer surrounding individuals with a larger probability can maintain the diversity of the population, computing the spatial distances between all individuals wastes a relatively large amount of time and reduces the efficiency of the algorithm. Therefore, a fitness-based selection strategy is applied in PS-FW: the elite strategy directly retains the best individual for the next iteration, and the remaining locations are selected by the roulette criterion according to their fitness.
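A sketch of this elite-plus-roulette selection for a minimization problem; the transformation of objective values into selection weights (shifting by the worst value) is an assumption rather than the paper's exact scheme.

```python
import numpy as np

def elite_roulette_selection(candidates, fitness, n_select, rng=np.random.default_rng()):
    """Fitness-based selection of PS-FW: keep the best candidate (elite strategy) and
    fill the remaining slots by roulette-wheel selection."""
    best = int(np.argmin(fitness))
    # convert objective values to selection weights: better (smaller) value -> larger weight
    weights = fitness.max() - fitness + np.finfo(float).eps
    p = weights / weights.sum()
    others = rng.choice(len(candidates), size=n_select - 1, p=p)
    return candidates[np.concatenate(([best], others))]
```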

According to the description above, the main codes of the PS-FW algorithm are given in Algorithm 3.

(1) Input: Objective function and constraints.
(2) Initialization
(3) Parameters initialization: assign values to all the control parameters of PS-FW.
(4) Population initialization: generate random values for the position and velocity of each particle in the feasible
  domain, and calculate the fitness of the initial population.
(5) Set the iteration counter, the personal best position of each particle, and the global best position.
(6) Iterations
(7) while the termination condition is not satisfied do
(8)  
(9)  for each particle i do
(10)   for each dimension d do
(11)    Update the velocity of particle by using (1).
(12)    Update the position of particle by using (2).
(13)    if the dth dimension of the position is below the lower bound or above the upper bound
(14)     Update the value of the dth dimension by using (17).
(15)    end if
(16)   end for
(17)  end for
(18)  Calculate the number of abandoned particles by using (9).
(19)  Sort the particle population in ascending order of fitness and retain the particles with better fitness.
(20) Generate explosion sparks by using Algorithm 1.
(21)  Calculate the fitness of the explosion sparks and store the best explosion spark.
(22) Generate mutation sparks by using Algorithm 2.
(23) Select the individuals from the explosion sparks and mutation sparks by using the selection strategy.
(24) Combine the retained particles with the selected individuals to generate the new population.
(25) Calculate the personal best and global best positions of the new population.
(26) end while
(27) Output: the global best position and its fitness.

4. Problems, Experiments, and Discussion

4.1. Test Problems

In order to evaluate the efficacy and accuracy of the proposed algorithm, the performance of PS-FW is tested on 22 high-dimensional benchmark functions. The test problems, which consist of multimodal and unimodal functions, are listed in Table 1 together with their optimal solutions and search scopes. Compared with unimodal problems, it is difficult to find the global optimum of multimodal problems because the local optima tend to trap optimization algorithms in their neighborhoods. Therefore, an algorithm that can efficiently find the optimal solutions of multimodal functions can be regarded as an excellent optimization algorithm.

4.2. Comparison of PS-FW with PSO and FWA

In this section, we compare the performance of PS-FW with PSO and FWA on the 22 benchmark functions. In order to explore the global optimization capability of the three algorithms on high-dimensional problems, three experiments with different problem dimensions are carried out, and each algorithm is used to solve all the benchmark functions 20 times independently. To make a fair comparison, the general control parameters, such as the maximum number of iterations and the population size, are given the same values for all algorithms: the maximum number of iterations is set to 1000 and the population size is set to 50 for each function. The algorithms used in the experiment are coded in MATLAB 14.0, and the experiment platform is a personal computer with a Core i5 2.02 GHz CPU, 4 GB memory, and Windows 7. To eliminate the impact of differences in parameter settings, the main control parameters of PS-FW are kept consistent with those of PSO and FWA, and the other detailed control parameters are shown in Table 2.

For all the benchmark functions, the mean and standard deviation of the best solutions obtained by PS-FW and the other algorithms over 20 independent runs are recorded, and the optimization results are shown in Tables 3–5. The ranks are also presented in the tables, and the three algorithms are ranked mainly by the mean of the best solutions. In addition, the average convergence speed of the proposed PS-FW is compared with that of the other algorithms on three representative functions, whose convergence curves are shown in Figure 3.

According to the ranks shown in Tables 3–5, the average values of the best solutions found by the proposed PS-FW outperform those of the other algorithms, and the performance of PS-FW in terms of the standard deviation of the best solutions is also better than that of the rest of the algorithms. For the 22 problems in the lowest dimension setting, PS-FW obtains the global optimum of 12 functions, which shows an excellent ability for solving optimization problems. As the problem dimension increases, the hybrid algorithm maintains outstanding performance and still obtains the optimal solutions of 10 of these functions, the two exceptions being multimodal, compared with the results in Table 3. When the problem dimensions are 60 and 100, PS-FW can still reach the global optimum of these two functions, but not in every run. This is because they are multimodal problems, and the number of local optima increases rapidly with the problem dimension, which makes it harder to avoid being trapped in local optima. In addition, according to the ranks and values shown in Tables 3–5, PS-FW attains the highest rank for all the functions. It should also be noted that PS-FW obtains more stable solutions than PSO and FWA on all problems as the dimensionality increases. The convergence speed of the three algorithms can be seen in Figure 3, where the descent rate of the average best solutions of PS-FW is clearly higher than that of the other two algorithms. This is because the advantages of PSO and FWA are combined in PS-FW, so that the hybrid algorithm enhances both its global and local search abilities. Therefore, PS-FW is efficient and robust in dealing with the high-dimensional benchmark functions.

The above analysis shows that the PS-FW algorithm performs well in solving the functions in Table 1. However, because the optima of these functions are mostly at the origin, the performance of PS-FW on nonzero problems needs further exploration. An experiment on shifted problems is therefore carried out to examine the comprehensive performance of PS-FW. In this experiment, the optima of test functions derived from Table 1 are shifted, and the specific values are displayed in Table 6. To achieve a fair comparison between the experiments, the parameter settings of the three algorithms are consistent with Table 2 and the problem dimension is kept fixed. The optimization results of the three algorithms are shown in Table 7, and the convergence curves of the three algorithms on three representative functions are displayed in Figure 4.

From Table 7, we can see that the PS-FW algorithm retains its high performance and obtains the optimal solutions of 11 functions in Table 6. Moreover, PS-FW achieves the best rank of the three algorithms for all the functions with shifted optima, which demonstrates its powerful ability on optimization problems with nonzero optima. Comparing Table 7 with Table 3 shows that the fireworks algorithm is relatively weak at searching for nonzero optima. However, the PS-FW algorithm, which derives from the fireworks algorithm and incorporates the operators of PSO, shows better performance, which supports the soundness of combining the two algorithms. In addition, the result of PS-FW on function 16 is worse than in the previous experiment. This is because it is a multimodal function, and slight deviations from the optimum can cause a significant increase in the objective function value. The convergence curves in Figure 4 show that the convergence speed of PS-FW also remains fast. In order to determine more clearly whether the convergence performance of PS-FW is superior to that of the other two algorithms, we compute the number of successful runs (success rate) and the average number of iterations in successful runs for each function in Table 6. Because the optimal solutions obtained by the different algorithms vary, we define a convergence criterion for each function: a run is considered successful if the best solution found by the algorithm satisfies (18) [39], and the minimum number of iterations at which the criterion is met is used to compute the average number of iterations:

$$\lvert f_{\mathrm{best}} - f^{*} \rvert \le \varepsilon, \quad (18)$$

where $f_{\mathrm{best}}$ is the best objective value found in the run, $f^{*}$ is the optimum of the function, and $\varepsilon$ denotes the admissible error of the algorithm.

Table 8 reports, for each function, the number of successful runs and the average number of iterations in successful runs; when none of the 20 runs is successful, the iteration count is recorded as a value greater than the maximum number of iterations.

According to the statistical results and ranks presented in Table 8, both the success rate and the average number of iterations of PS-FW over 20 runs are superior to those of the other algorithms. For all the benchmark functions in Table 6, the proposed PS-FW satisfies the convergence criterion in all 20 runs, whereas the other algorithms can only converge to the criterion for several functions. In addition, PS-FW obtains the highest ranks for the average number of iterations in successful runs and can converge to the criterion within a relatively small number of iterations. In summary, PS-FW outperforms the other algorithms in terms of stability and convergence speed and is an efficacious algorithm for optimization problems whose optima are at the origin or are shifted.

4.3. Comparison of PS-FW with PSO Variants

In this section, we compare the performance of the proposed PS-FW with several existing variants of PSO reported in a published study. The comparison is based on the 12 benchmark functions introduced in the paper of Nickabadi et al. [22], and the order of the functions is consistent with that paper. To make a fair comparison, the number of runs and the maximum number of iterations of PS-FW are set to 30 and 200,000, respectively, and the other parameters are set to be the same as those in Section 4.2. The dimension of the test problems follows [22], and the mean and standard deviation of the best solutions obtained by the algorithms are calculated. The comparison results are presented in Table 9, together with the rank of each algorithm.

According to the results in Table 9, PS-FW outperforms the other six PSO variants in both the average value and the standard deviation of the best solutions after 200,000 iterations. Among the 12 benchmark functions, PS-FW obtains the optimum of 10 functions, which manifests a highly powerful ability to find the global optimal solution. In addition, PS-FW acquires the highest rank on almost all the test problems, with a single exception, which indicates that PS-FW achieves a significant improvement over the other algorithms. Besides the analysis of the numerical results obtained by PS-FW and the other algorithms, we apply nonparametric statistical tests to confirm the superiority of PS-FW: the Friedman test and the Bonferroni-Dunn test are adopted to compare the performance of PS-FW with that of the other algorithms.

The Friedman test is a multiple comparison test used to detect significant differences among algorithms based on sets of results [40]. The algorithms are ranked in the Friedman test: the algorithm with the best performance receives the minimum rank, the worst receives the maximum rank, and so on. In this section, the mean and standard deviation of the best solutions from Table 9 are subjected to the Friedman test, and the results are given in Table 10. All the $p$ values in Table 10 are lower than the considered level of significance, which indicates that significant differences among the seven algorithms do exist. According to the ranks obtained by the Friedman test in Table 10, PS-FW has the best performance in both the mean and the standard deviation of the best solutions, followed by AIWPSO, CLPSO, and the other four algorithms. Therefore, we can conclude that the accuracy of the solutions obtained by PS-FW is better than that of the other algorithms. However, the Friedman test can only detect whether significant differences exist among all the algorithms; it cannot perform pairwise comparisons between PS-FW and each of the other algorithms. Hence the Bonferroni-Dunn test is executed to check the superiority of PS-FW.

The Bonferroni-Dunn test provides an intuitive way to detect significant differences between two or more algorithms. In the Bonferroni-Dunn test, two algorithms are judged to be significantly different if their mean ranks differ by at least the critical difference ($CD$), which is calculated as follows [41]:

$$CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}}, \quad (19)$$

where $k$ and $N$ are the numbers of algorithms and benchmark functions, respectively, and $q_{\alpha}$ is the critical value at the considered probability level, as given in (20).

By utilizing (19) and (20), the critical differences at each significance level considered in this paper are obtained.

Here we carry out the Bonferroni-Dunn test for the mean of the best solutions, the success rate, and the average number of iterations in successful runs on the basis of the ranks obtained by the Friedman test. To display the results of the Bonferroni-Dunn test more intuitively, the critical differences among the seven algorithms are illustrated in Figure 5. For clarity of comparison, a horizontal line indicating the threshold for the best performing algorithm (the one in pink) is drawn in each graph. In addition, two further lines, one for each significance level considered in the paper, are drawn, whose heights equal the sum of the minimum rank and the corresponding critical difference. If a bar exceeds a significance-level line, the corresponding algorithm is shown to perform worse than the best performing algorithm. In Figure 5(a), the bar of PS-FW has the lowest height among all the algorithms, and the bars of stdPSO, CPSO, FIPS, and Frankenstein exceed the significance-level lines, which indicates that PS-FW performs significantly better than these four algorithms in terms of solution accuracy. In addition, PS-FW attains the best rank for the standard deviation according to Figure 5(b), with an obvious advantage over stdPSO, CPSO, FIPS, and Frankenstein. Therefore, we can conclude that PS-FW is the best performing algorithm, followed by AIWPSO, CLPSO, and the other four algorithms, and the advantages of PS-FW in efficiency and solution accuracy over the other algorithms are clearly established.

Besides the above analysis, we count the number of successful runs and the average number of iterations in successful runs of PS-FW on the 12 benchmark functions; the statistical results are presented in Table 11. In this section, a successful run means that the algorithm obtains the optimum within 200,000 iterations. As shown in Table 11, PS-FW converges to the optimal solution in every run for the vast majority of functions, which manifests the robustness of PS-FW in solving optimization problems. To compare the convergence speed of PS-FW with the other algorithms fairly, the average numbers of iterations in successful runs are compared on six functions introduced in Nickabadi et al.'s paper. According to the numerical results in Table 11, PS-FW converges to the optimal solution for all six functions within 12,000 iterations, whereas, based on the convergence curves in Nickabadi et al.'s paper, the other algorithms either have difficulty obtaining the optimum for four of these functions after 200,000 iterations or require far more iterations to converge to the optimum for the remaining two. Therefore, we can argue that the robustness and convergence speed of PS-FW are superior to those of the other algorithms.

4.4. Experiments to Analyze the PS-FW Control Parameters

In this section, we investigate the impact of the control parameters on the performance of PS-FW. As introduced previously, PS-FW has several control parameters, including those adopted from PSO and FWA. Here we only analyze three main control parameters: the two control factors of the explosion amplitude and the number of mutation sparks. In order to test the impact of changes in these control parameters exhaustively, six additional combinations of parameters are selected and tested. Each set of parameters corresponds to 20 runs on the 22 functions introduced in Table 1, and the problem dimension is set to 100. The other parameter settings of PS-FW are the same as those in Section 4.2. The combinations of control parameters are represented as optimization strategies, whose detailed settings are shown in Table 12; the control parameters of Section 4.2 are marked as Strategy-1 and are also listed. As shown in Table 12, we take a contrasting approach in which one parameter is changed while the other parameters are kept unchanged.

The optimization results and the corresponding ranks of the different strategies are shown in Tables 13 and 14, focusing on the mean and standard deviation of the best solutions obtained by each strategy. From these results, PS-FW with Strategy-6 and Strategy-7 has the best performance for almost all the benchmark functions and obtains the highest ranks for both the mean and the standard deviation of the best solutions. Under Strategy-6 and Strategy-7, PS-FW finds the optimum of 16 functions in all 20 runs, including six functions whose global best solutions cannot be found under the other optimization strategies of PS-FW. The excellent performance of PS-FW with Strategy-6 and Strategy-7 therefore supports the soundness of the proposed mutation operator and indicates that increasing the number of mutation sparks can enhance the global search capability of the algorithm. However, according to the "no free lunch" theorem [42], no algorithm can perform better than all others on all problems; accordingly, PS-FW with Strategy-6 and Strategy-7 performs poorly on one function. That function has a wide search scope, so the solutions change little in the later iterations when the amplitude-related parameter is small, which results in a relatively slow convergence speed for PS-FW despite the increased number of mutation sparks. The other strategies have their own advantages on various test functions: PS-FW with Strategy-1 performs well on several functions, good solutions are obtained under Strategy-2 and Strategy-3 on others, and PS-FW with Strategy-4 and Strategy-5 works well on a further pair of functions. In addition, PS-FW obtains the optimum of 10 functions and keeps outstanding performance on the other functions under all seven strategies, which strongly supports the robustness of the proposed algorithm. To compare the convergence speeds of the different strategies of PS-FW, the convergence curves on several functions are shown in Figure 6. The curves demonstrate the superiority of Strategy-6 and Strategy-7 in terms of convergence speed, and PS-FW with all strategies converges to solutions that are very close to the optima.
Then we conduct the Friedman test and the Bonferroni-Dunn test for the mean and standard deviation of the best solutions obtained by the different optimization strategies, so as to determine the degree of impact of each control parameter on the performance of PS-FW. The results of the Friedman test for the different strategies of PS-FW are shown in Table 15, and the results of the Bonferroni-Dunn test in terms of the mean and standard deviation based on Table 15 are presented in Figures 7 and 8.

According to the results of the Friedman test in Table 15, the $p$ value is lower than the considered level of significance for both the mean and the standard deviation of the best solutions, which indicates that the performance of the seven strategies of PS-FW differs significantly. From the ranks obtained by the Friedman test in Table 15, PS-FW with Strategy-7 has the best performance, followed by Strategy-6, Strategy-1, and so on, while PS-FW with Strategy-2 performs the worst among the strategies in terms of the average values of the best solutions. In the Bonferroni-Dunn test, the values of the critical difference are the same as those in Section 4.3, and the lines of the best rank and the significance levels are again drawn in Figures 7 and 8. Examining the bars corresponding to the different strategies of PS-FW in Figure 7(a), the heights of the bars for Strategy-1 to Strategy-5 exceed the significance-level lines. Hence Strategy-7 represents the best combination of control parameters among the seven strategies, and PS-FW with Strategy-7 performs significantly better than the other strategies except Strategy-6. In addition, PS-FW with Strategy-6 shows significant superiority over Strategy-2 to Strategy-5 in terms of the average values of the best solutions, based on Figure 7(b). Moreover, as shown in Figure 8, the hybrid algorithm with different strategies exhibits relatively small gaps in the standard deviation; Strategy-7 emerges as the best performer for the standard deviation of the best solutions, followed by Strategy-6, Strategy-1, and the other strategies, while Strategy-4 has the worst performance.

Therefore, based on the analysis above, the solution accuracy and convergence speed of PS-FW are determined by the three control parameters. Compared with the two amplitude control factors, the number of mutation sparks has a greater impact on the performance of PS-FW, so this number can be appropriately increased when solving difficult multimodal global optimization problems. In addition, the amplitude control factor can be increased appropriately for optimization problems with a large search range. Considering that increasing the number of mutation sparks lengthens the computing time, Strategy-1, which ranks third among the seven strategies, is used for the experiments in Sections 4.2 and 4.3 of this paper in order to improve computational efficiency. In general, suitable control parameters should be chosen for each problem by taking all these aspects into consideration.

5. Conclusion

In this paper, a hybrid algorithm named PS-FW is proposed to solve global optimization problems. In PS-FW, the exploitation capability of PSO is applied to find the optimal solution and to make the hybrid algorithm converge quickly, whereas the exploration ability of FWA is used to search for better solutions in the entire feasible space. Moreover, the abandonment and supplement mechanism, the modified explosion operator, and the novel mutation operator are proposed to enhance both the global and local search abilities of the algorithm. The validity of PS-FW is confirmed on 22 well-known high-dimensional benchmark functions. Comparisons with PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and AIWPSO show that PS-FW is an efficacious, fast-converging, and robust algorithm for global optimization problems.

Future work will refine PS-FW by testing it on more complex high-dimensional optimization problems. Furthermore, we will try to apply the algorithm to multiobjective optimization problems and to real-world problems such as spatial layout optimization, route optimization, and structural parameter optimization.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was funded by National Natural Science Foundation of China (nos. 51674086 and 51534004) and Northeast Petroleum University Innovation Foundation for Postgraduate (no. YJSCX2015-012NEPU).