Abstract

Particle swarm optimization (PSO) has shown good performance in solving various optimization problems. However, it tends to suffer from premature stagnation and to lose exploration ability in the later stage of evolution when solving complex problems. This paper presents a sequential hybrid particle swarm optimization and gravitational search algorithm with dependent random coefficients, called HPSO-GSA, which first incorporates the gravitational search algorithm (GSA) into the PSO by means of a sequential operating mode and then adopts three learning strategies in the hybridization process to overcome the aforementioned problems. Specifically, the particles in the HPSO-GSA first enter the PSO stage and update their velocities by adopting the dependent random coefficients strategy to enhance the exploration ability. Then, the GSA is incorporated into the PSO by using the fixed iteration interval cycle or adaptive evolution stagnation cycle strategies when the swarm drops into a local optimum and fails to improve its fitness. To evaluate the effectiveness and feasibility of the proposed HPSO-GSA, simulations were conducted on benchmark test functions. The results reveal that the HPSO-GSA exhibits superior performance in terms of accuracy, reliability, and efficiency compared to PSO, GSA, and other recently developed hybrid variants.

1. Introduction

As many real-world optimization problems become increasingly complex, traditional optimization algorithms can no longer satisfy the problem requirements, and more effective optimization algorithms are needed. Hence, various kinds of metaheuristic algorithms inspired by natural phenomena have taken center stage in recent decades for solving complex optimization problems. Genetic algorithm (GA) [1], particle swarm optimization (PSO) [2], artificial immune system (AIS) [3], differential evolution (DE) [4], ant colony optimization (ACO) [5], glowworm swarm optimization (GSO) [6], artificial bee colony (ABC) [7], gravitational search algorithm (GSA) [8], grey wolf optimization (GWO) [9], cat swarm optimization (CSO) [10], harmony search algorithm (HS) [11], and bacterial foraging optimization algorithm (BFOA) [12] have been developed by researchers in recent years and have shown superior performance in solving a wide range of optimization problems, such as function optimization [4–9, 11–15], fuzzy inference systems [16, 17], image processing [18, 19], economic dispatch [20, 21], and neural network training [22, 23].

Overall, the reviewed literature indicates that there is no single superior method for solving optimization problems. That is, no single algorithm can solve all optimization problems; rather, each algorithm is well suited to a particular class of problems. Although the aforementioned algorithms achieve good performance on many optimization problems, they still have undesirable shortcomings. For instance, PSO often suffers from premature convergence, as it tends to be trapped in local optima because of its rapid convergence speed [24]. GSA requires a long computation time to find the solution for some problems [21]. Hence, there is considerable room for improvement in developing better optimization algorithms.

Another issue is how to balance the exploration and exploitation abilities of a single metaheuristic algorithm such as PSO or GSA. The key challenge for metaheuristic optimization algorithms is to maintain a good trade-off between exploitation and exploration during the search process. A good algorithm should balance these two abilities when seeking the global optimal solution. However, some algorithms are notably stronger in only one of them. For instance, PSO tends to converge rapidly on multivariable optimization problems, whereas GSA's global exploration ability is particularly conspicuous. Hence, PSO and GSA possess their respective advantages and potential. This encourages us to develop an appropriate hybridization of different metaheuristic algorithms to mitigate the weaknesses of the original algorithms and obtain better optimization performance than a single algorithm, thereby acquiring rapid response and avoiding premature convergence.

Inspired by the abovementioned ideas, and to further mitigate the respective drawbacks of PSO and GSA, this paper proposes a novel combination strategy that integrates PSO with GSA, namely, a sequential hybrid particle swarm optimization and gravitational search algorithm with dependent random coefficients (HPSO-GSA), based on a sequential hybrid pattern. To be specific, we first propose a new velocity updating equation for PSO based on the dependent random coefficients strategy to enhance the balance between exploitation and exploration. Second, the existing PSO evolution framework is improved, and the GSA is incorporated into the PSO in a sequential mode when the swarm drops into a local optimum and fails to improve its fitness. That is, the HPSO-GSA first enters the PSO phase to update velocities and positions. Then, the GSA operator is carried out when the fixed iteration interval cycle or adaptive evolution stagnation cycle conditions are met during the evolution. Finally, the performance of the proposed algorithm is evaluated against PSO, GSA, and state-of-the-art hybrid variants on a set of benchmark test functions. The results reveal that the proposed HPSO-GSA achieves better optimization performance than the compared algorithms.

The remainder of this paper is organized as follows. A brief review of related works on PSO and GSA is given in Section 2. Section 3 introduces the proposed HPSO-GSA approach. In Section 4, the experiments, comparisons, and discussion for the used benchmark test problems are carried out to evaluate the performance of the proposed algorithm. Finally, the conclusion and future work are given in Section 5.

2. Related Works

In this section, we first introduce the relevant background, including the PSO and GSA algorithms. Next, the state-of-the-art PSO and GSA hybrid variants are reviewed.

2.1. Particle Swarm Optimization (PSO)

PSO is a population-based metaheuristic optimization method, which was originally introduced by Kennedy and Eberhart [2]. Since its development, PSO has become one of the most promising optimization techniques for solving global optimization problems. The algorithm is motivated by intelligent collective behavior such as the movement of a flock of birds, a school of fish, or a group of ants seeking food. Compared to other optimization techniques, PSO is easy to implement with few parameters to adjust and is computationally inexpensive. PSO does not require any gradient information from the objective functions; it uses only primitive mathematical operators and the exchange of information among candidate individuals (particles) to attain desirable optimization performance. In the past decades, PSO has been successfully applied to a wide range of optimization fields, often achieving better results more quickly and more cheaply than other methods such as GA, DE, and ACO [25].

Suppose that each particle, which represents a potential solution of the problem, flies through a D-dimensional search space in a population of size N. Each particle is associated with two vectors, namely, a position vector $X_i(t) = (x_i^1(t), x_i^2(t), \ldots, x_i^D(t))$ and a velocity vector $V_i(t) = (v_i^1(t), v_i^2(t), \ldots, v_i^D(t))$ at the current iteration t. The personal best experience of the ith particle is represented by $pbest_i(t)$, and the best global position of the swarm found so far is stored in $gbest(t)$. The initial position of each particle in the population is randomly generated within the range of the decision space of the problem. The initial velocity for each dimension is set to zero. Then, the particle's velocity and its new position are updated by the following equations:

$$v_i^d(t+1) = w(t)\, v_i^d(t) + c_1 r_1 \left( pbest_i^d(t) - x_i^d(t) \right) + c_2 r_2 \left( gbest^d(t) - x_i^d(t) \right) \qquad (1)$$

$$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1) \qquad (2)$$

where $d = 1, 2, \ldots, D$. $r_1$ and $r_2$ are two mutually independent random coefficients drawn from the uniform distribution within the range [0, 1]. $w(t)$ is the linear time-varying inertia weight (TVIW) factor within the interval [0.4, 0.9] suggested by Shi and Eberhart [26]. Its goal is to control the impact of the previous velocity on the current velocity, thereby influencing the trade-off between global exploration and local search of the particles. $c_1$ and $c_2$ are two linear time-varying acceleration coefficients (TVAC) suggested by Ratnaweera et al. [27]. Their objectives are to control the influence of the personal and swarm best experiences, respectively.
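To make the update concrete, the following Python sketch implements equations (1) and (2) with TVIW and TVAC schedules. The specific schedules (w from 0.9 to 0.4, c1 from 2.5 to 0.5, c2 from 0.5 to 2.5) and the function name are illustrative assumptions commonly used with TVIW/TVAC PSO, not values confirmed by this paper.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, t, T_max, lb, ub):
    """One standard PSO iteration (equations (1) and (2)).

    X, V, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    The TVIW/TVAC schedules below are common settings, not taken from the paper.
    """
    N, D = X.shape
    w  = 0.9 - 0.5 * t / T_max              # TVIW: 0.9 -> 0.4
    c1 = 2.5 - 2.0 * t / T_max              # TVAC: 2.5 -> 0.5
    c2 = 0.5 + 2.0 * t / T_max              # TVAC: 0.5 -> 2.5
    r1 = np.random.rand(N, D)               # independent random coefficients
    r2 = np.random.rand(N, D)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # equation (1)
    X = np.clip(X + V, lb, ub)                                   # equation (2) + bound handling
    return X, V
```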

2.2. Gravitational Search Algorithm (GSA)

GSA, inspired by Newton's law of gravity and mass interactions, is a more recently developed swarm-based metaheuristic optimization method [8]. In GSA, all agents are regarded as objects with different masses, and their performances are evaluated by their masses using a fitness function. Each agent attracts every other agent through the gravitational force, which is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. This force leads to a global movement of all agents towards the heavier masses with the aid of Newton's law of motion. A heavier agent, which corresponds to a better solution of the problem, moves more slowly than a lighter one. It follows that, over time, all masses are attracted towards the heaviest mass, which represents an optimum solution in the search space.

Let us consider a population of N agents (masses) in a D-dimensional decision space, where the position vector of the ith agent at iteration t is described as $X_i(t) = (x_i^1(t), x_i^2(t), \ldots, x_i^D(t))$ and $x_i^d(t)$ represents the position of the ith agent in the dth dimension at iteration t. The algorithm initializes the N agents randomly in the given decision space. During the evolution process, the gravitational force acting on agent i from agent j at iteration t is defined as (3), and the whole force acting on agent i in the dth dimension, a randomly weighted sum of the dth components of the forces coming from the other agents, is given as (4):

$$F_{ij}^d(t) = G(t) \frac{M_{pi}(t) \times M_{aj}(t)}{R_{ij}(t) + \varepsilon} \left( x_j^d(t) - x_i^d(t) \right) \qquad (3)$$

$$F_i^d(t) = \sum_{j \in Kbest,\, j \neq i} rand_j\, F_{ij}^d(t) \qquad (4)$$

where $M_{aj}(t)$ represents the active gravitational mass related to agent j and $M_{pi}(t)$ represents the passive gravitational mass related to agent i. $G(t)$ is the gravitational constant defined by (5). $\varepsilon$ is a small constant value. $R_{ij}(t)$ is the Euclidean distance between agents i and j defined by (6). $rand_j$ is a uniform random number in the interval [0, 1]. $Kbest$ represents the set of the first K agents with the best fitness values and the biggest masses; it is defined as a function of time, starting from its initial value at the beginning and decreasing linearly to 1 over the iterations, so that finally there will be just one agent applying force to the others:

$$G(t) = G_0\, e^{-\alpha t / T_{\max}} \qquad (5)$$

$$R_{ij}(t) = \left\| X_i(t), X_j(t) \right\|_2 \qquad (6)$$

According to Newton's law of motion, the acceleration of an agent is proportional to the resultant force and inversely proportional to its mass, so the acceleration of the ith agent in the dth dimension at iteration t is denoted as follows:

$$a_i^d(t) = \frac{F_i^d(t)}{M_{ii}(t)} \qquad (7)$$

where $M_{ii}(t)$ is the inertial mass of agent i.

The next velocity of agent i is defined as a fraction of its current velocity added to its acceleration. Sequentially, its next position is calculated based on the corresponding velocity:

$$v_i^d(t+1) = rand_i \times v_i^d(t) + a_i^d(t), \qquad x_i^d(t+1) = x_i^d(t) + v_i^d(t+1) \qquad (8)$$

The mass of agent i is calculated by (9), and the normalization of the calculated mass is given as (10):

$$m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)} \qquad (9)$$

$$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)} \qquad (10)$$

where $fit_i(t)$ represents the fitness value of agent i at iteration t, and $best(t)$ and $worst(t)$ represent the best and worst fitness values in the current population at iteration t, respectively. The gravitational mass $M_i(t)$ represents the mass of agent i at iteration t, which embodies the fitness evaluation value of agent i.

For a minimization optimization problem, best(t) and worst(t) are defined as follows:

$$best(t) = \min_{j \in \{1, \ldots, N\}} fit_j(t) \qquad (11)$$

$$worst(t) = \max_{j \in \{1, \ldots, N\}} fit_j(t) \qquad (12)$$
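For clarity, a compact Python sketch of one GSA iteration following equations (3) to (12) is given below. The constants G0 = 100 and alpha = 20 are defaults commonly used with GSA and are assumptions here, as are the helper names.

```python
import numpy as np

def gsa_step(X, V, fitness, t, T_max, G0=100.0, alpha=20.0, eps=1e-12):
    """One GSA iteration (equations (3)-(12)) for a minimization problem.

    X, V: arrays of shape (N, D); fitness: array of shape (N,).
    G0 and alpha are common default settings, not values confirmed by this paper.
    """
    N, D = X.shape
    best, worst = fitness.min(), fitness.max()                      # (11), (12)
    m = (fitness - worst) / (best - worst + eps)                    # (9)
    M = m / (m.sum() + eps)                                         # (10)
    G = G0 * np.exp(-alpha * t / T_max)                             # (5)

    # Kbest shrinks linearly from N down to 1 over the run
    k = int(round(N - (N - 1) * t / T_max))
    kbest = np.argsort(fitness)[:max(k, 1)]                         # indices of the heaviest agents

    F = np.zeros((N, D))
    for i in range(N):
        for j in kbest:
            if i == j:
                continue
            R = np.linalg.norm(X[i] - X[j])                         # (6)
            F[i] += np.random.rand() * G * M[i] * M[j] / (R + eps) * (X[j] - X[i])  # (3), (4)

    a = F / (M[:, None] + eps)                                      # (7)
    V = np.random.rand(N, 1) * V + a                                # (8), velocity part
    X = X + V                                                       # (8), position part
    return X, V
```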

2.3. State-of-the-Art PSO and GSA Hybrid Variants

Extensive work has been performed to improve the performance of the original PSO and GSA. Among these works, studies on hybrid systems that combine skilled metaheuristic optimization algorithms to obtain a good compromise between exploration and exploitation have gained extensive popularity. The most classical PSO variants have been reported in [14, 15, 23, 28–36]. Kao and Zahara [28] proposed a hybridization strategy of PSO and GA (GAPSO) for solving multimodal test functions. In GAPSO, individuals in a new generation are drawn not only from the crossover and mutation operations in GA but also from the movement mechanism in PSO. The results show the superiority of the hybrid GAPSO approach in terms of solution quality and convergence speed. Esmin et al. [29] introduced a PSO algorithm coupled with the GA mutation operator, namely, HPSOM, for solving unconstrained global optimization problems. Shunmugalatha and Slochanal [30] proposed a hybrid particle swarm optimization (HPSO), which incorporates the crossover and mutation operators and the subpopulation process of the genetic algorithm into particle swarm optimization. The implementation of HPSO on test functions shows that it converges to better solutions much faster. Zhang and Xie [31] introduced a hybrid particle swarm optimization with a differential evolution operator (DEPSO), which provides a bell-shaped mutation with consensus on the population diversity along with the evolution. A set of benchmark functions was applied to evaluate its efficiency. A hybrid algorithm named DE-PSO was proposed by Zhang et al. [32], which incorporates concepts from DE and PSO, updating particles not only by DE operators but also by the mechanisms of PSO. The proposed algorithm was tested on several benchmark functions. Liu et al. [14] proposed a novel hybrid algorithm named PSODE, where DE is incorporated to update the previous best positions of PSO particles to force them to jump out of local attractors in order to prevent stagnation of the population. Besides GA and DE, PSO has been hybridized with extremal optimization (EO) [15], central force optimization (CFO) [23], estimation of distribution algorithm (EDA) [33], artificial immune system (AIS) [34], gravitational search algorithm [35], and teaching-learning-based optimization (TLBO) [36]. Overall, these PSO-based hybrid variants have been successfully utilized for solving global optimization problems.

Similarly, several hybrid GSA variants have also been reported in [18, 22, 37–39]. For instance, Mirjalili et al. [13, 22] proposed a novel hybrid PSOGSA algorithm by adopting a parallel model for solving benchmark function optimization and training feedforward neural networks. A hybrid approach that integrates differential evolution into the gravitational search algorithm (DE-GSA) for unconstrained optimization was introduced by Li et al. [37]. Chen et al. [38] proposed a hybrid gravitational search algorithm combined with simulated annealing (GSA-SA) for the traveling salesman problem. A new hybrid approach, namely, the genetic algorithm-based gravitational search algorithm (GA-GSA), was proposed for image segmentation [18]. A novel GSA-SVM hybrid system, which hybridizes the GSA with the support vector machine (SVM), was proposed to improve classification accuracy with an appropriate feature subset in binary problems [39]. Clearly, these hybrid GSA systems have demonstrated powerful results when compared with other approaches such as the original GSA, DE, GA, and PSO.

2.4. Comparison of Particle Swarm Optimization and Gravitational Search Algorithm

To thoroughly understand the two metaheuristic optimization methods, we have identified three similarities and four differences between the PSO and the GSA. The similarities are as follows: (1) both are population-based metaheuristic algorithms; (2) particles/agent positions are updated by iteration; and (3) both algorithms use velocity formulations for position updating. On the other hand, they differ in the following aspects: (1) PSO simulates the social behavior of birds, whereas GSA was inspired by a physical phenomenon; (2) PSO employs fitness values for the two best positions pbest and gbest, while GSA uses fitness values to calculate masses that are proportional to gravitational forces; (3) PSO particles update their positions by means of dynamic velocities with cognitive and social behaviors, while GSA agents calculate their positions using changing accelerations with the concept of Newtonian gravity; and (4) PSO uses memory to store and update the velocity with the pbest and gbest, while GSA is memoryless and is concerned exclusively with the current status. Therefore, PSO and GSA have their respective specialties and potentialities to find optimum solutions. It encourages us to further design a hybridization of these two techniques to obtain better optimization performance.

3. Sequential Hybrid Particle Swarm Optimization and Gravitational Search Algorithm

As is well known, PSO ensures that the optimization process converges faster, whereas GSA assures that the search can jump out of local optima by maintaining diversity in the swarm [40]. Moreover, they have different search properties and movement mechanisms. Hence, we propose the new sequential hybrid version HPSO-GSA, where PSO is integrated with GSA to combine the merits of both algorithms. To be specific, first, to further balance global and local search in the PSO, the dependent random coefficients (DRC) strategy (Section 3.1) is introduced into the HPSO-GSA. Second, to decrease the computational cost caused by GSA's integration in the hybridization, two GSA-embedded strategies, namely, the fixed iteration interval cycle (FIIC) (Section 3.2) and the adaptive evolution stagnation cycle (AESC) (Section 3.3), are introduced into the algorithm. Finally, the computational complexity of the algorithm is theoretically analyzed in Section 3.4 based on its main operators.

3.1. Dependent Random Coefficients (DRCs)

As shown in equation (1), the two random coefficients $r_1$ and $r_2$ are generated independently, so in some cases the values of $r_1$ and $r_2$ are both too large or both too small. In the former case, both the personal and social influences are excessively weighted and the particles are driven too far away from the suboptimal solution. In the latter case, both the personal and social influences become negligible and the convergence speed of the algorithm is sharply reduced. To alleviate these phenomena, dependent random coefficients based on these random variables are introduced as follows:

To demonstrate the impact of the DRC strategy on the evolution of the population, an experiment was performed on the Rosenbrock and Ackley test functions. The average velocity of the particles over the iterations is shown in Figure 1. On the one hand, it is desirable that particles with high velocities explore large areas of the decision space to find new regions. From Figure 1, we can see that the DRC strategy relatively increases the average velocity of the population in the early iterations, thereby improving the diversity of the swarm and providing the particles with the ability to escape premature convergence. On the other hand, in the later stage, the particles need to exploit local regions more precisely to improve their performance. It is obvious from the results that the average velocity of the algorithm with the DRC strategy is lower than that of the algorithm with independent random coefficients. In this case, the particles can find a better solution with a faster convergence speed in the later iterations. In contrast, the velocity of the algorithm with independent random coefficients decreases slowly, so the particles need more iterations or a longer runtime to seek an optimum solution.
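As an illustrative sketch only, the following Python snippet shows one plausible way to couple the two coefficients (drawing r1 uniformly and setting r2 = 1 − r1, so they are negatively dependent rather than independent). This particular coupling is an assumption for illustration and is not necessarily the DRC formula adopted in the paper.

```python
import numpy as np

def dependent_random_coefficients(N, D):
    """One plausible DRC scheme: draw r1 uniformly and set r2 = 1 - r1.

    This coupling keeps r1 + r2 = 1, so the personal and social terms cannot be
    simultaneously over- or under-weighted. It is an illustrative assumption,
    not necessarily the exact formula of the paper.
    """
    r1 = np.random.rand(N, D)
    r2 = 1.0 - r1
    return r1, r2

def drc_velocity_update(X, V, pbest, gbest, w, c1, c2):
    """PSO velocity update of equation (1) using dependent random coefficients."""
    r1, r2 = dependent_random_coefficients(*X.shape)
    return w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
```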

3.2. Fixed Iteration Interval Cycle (FIIC)

In the FIIC strategy, the GSA is implanted into the PSO evolution framework at a fixed iteration interval Tf. For instance, if Tf is set to 10, the GSA is introduced into the PSO every ten iterations. However, setting the parameter Tf optimally is relatively challenging and depends on the termination condition, such as the maximum number of iterations. In most cases, the value of Tf is determined through experience or a parameter sensitivity analysis. Hence, in this paper, a parameter sensitivity analysis was carried out by means of experiments and a proper parameter value was determined. Too large or too small values of Tf are undesirable, as an overly large value makes it difficult to utilize the contribution of the GSA, whereas an overly small value wastes computational resources, thereby degrading the algorithm's convergence speed. According to the results of the multigroup experiments, Tf can be set to a large value in the range [30, 50] to save convergence time when the test function is unimodal or multimodal with relatively few local optima. Similarly, Tf can be set to a small value within the range [1, 32] to enhance the global convergence ability when the optimization function is multimodal with many local optima. In this work, Tf is fixed at 20 according to the parameter sensitivity analysis shown in Section 4.3.
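As a minimal sketch (the function name is mine), the FIIC trigger simply checks whether the current iteration index is a multiple of Tf:

```python
def fiic_triggered(t, Tf=20):
    """FIIC strategy: run the GSA operator every Tf iterations (Tf = 20 in this paper)."""
    return t > 0 and t % Tf == 0
```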

3.3. Adaptive Evolution Stagnation Cycle (AESC)

To adaptively implant the GSA into the PSO framework and guide the particles to potentially promising regions during the entire evolution process, we employ the AESC strategy in the HPSO-GSA to judge effectively whether the algorithm has reached the premature stagnation stage, that is, whether the particles are trapped in local optima. The AESC strategy is defined as follows:

Definition 1. (evolution stagnation). Suppose the fitness evaluation function for a given optimization problem is defined as $f(\cdot)$, and the global best position of the population found so far at iteration t is described as $gbest(t)$. For a small positive constant $\varepsilon_s$, if and only if $\left| f(gbest(t+1)) - f(gbest(t)) \right| \le \varepsilon_s$ is satisfied, the population is said to be in an evolution stagnation situation at iteration t+1 of the evolving process, where $\varepsilon_s$ is the evolution stagnation radius.

Definition 2. (evolution stagnation cycle). For a small positive constant $\varepsilon_s$, if the population remains in evolution stagnation for at least $T_s$ successive iterations under radius $\varepsilon_s$, then the value $T_s$ is defined as the evolution stagnation cycle for radius $\varepsilon_s$.
The evolution stagnation cycle is calculated by (14) during the successive evolving process:

$$T_s(t+1) = \begin{cases} T_s(t) + 1, & \text{if } \left| f(gbest(t+1)) - f(gbest(t)) \right| \le \varepsilon_s \\ 0, & \text{otherwise} \end{cases} \qquad (14)$$

In the HPSO-GSA, the AESC strategy is applied when the evolution stagnation cycle reaches or exceeds its threshold value. In this case, the GSA operator is added to the PSO to increase its flexibility for solving more complicated problems. Hence, the threshold has a direct impact on the performance of the HPSO-GSA, and too small or too large a value is undesirable. In our work, we set the threshold to 4 on the basis of the parameter sensitivity analysis shown in Section 4.3.
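A minimal Python sketch of the AESC bookkeeping follows. The class and attribute names are mine; the radius of 1e-2 and threshold of 4 follow the settings reported later in Section 4.2.

```python
class AESC:
    """Adaptive evolution stagnation cycle (Definitions 1 and 2, equation (14)).

    The counter and attribute names are illustrative; the radius and threshold
    follow the settings reported in Section 4.2 (1e-2 and 4, respectively).
    """
    def __init__(self, radius=1e-2, threshold=4):
        self.radius = radius        # evolution stagnation radius
        self.threshold = threshold  # evolution stagnation cycle threshold
        self.counter = 0            # successive stagnation iterations
        self.prev_best = None       # f(gbest) at the previous iteration

    def triggered(self, best_fitness):
        """Update the stagnation counter and report whether the GSA stage should run."""
        if self.prev_best is not None and abs(best_fitness - self.prev_best) <= self.radius:
            self.counter += 1       # equation (14): stagnation continues
        else:
            self.counter = 0        # improvement observed, reset the cycle
        self.prev_best = best_fitness
        return self.counter >= self.threshold
```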

3.4. Computational Complexity Analysis

Computational complexity usually concerns the analysis of storage space requirements and computational time costs. In most cases, time complexity is the main issue for population-based metaheuristic algorithms [15, 23]. Generally, the time complexity of an algorithm is proportional to the number of dimensions and the number of particles or agents in the swarm when only the main loop is considered, and it can therefore be calculated according to the main operations and the worst case in a full iteration cycle. A greater number of dimensions or a larger population size directly results in a higher running time. The computational step analysis for PSO, GSA, and HPSO-GSA in an iterative loop is given in Table 1. From Table 1, the time consumed, considering the main operations and the worst case in an iterative loop, is on the order of $O(ND)$ for the PSO and $O(N^2 D)$ for the GSA, the latter being dominated by the pairwise force computation. Therefore, the worst-case time for the HPSO-GSA in a single iterative loop is $O(ND + N^2 D)$, that is, $O(N^2 D)$. Apparently, the running time of the algorithm grows with the number of particles in the population and the dimension of the decision space; a larger population size and/or a greater dimension will directly result in more running time. The time complexity of the HPSO-GSA is thus greater than that of the PSO or the GSA when the GSA stage is entered from the PSO. However, the hybrid approach can avoid premature convergence and is thus capable of escaping from local optima with the help of increased diversity. Therefore, the HPSO-GSA is still a very competitive optimization approach at the expense of slightly more time.

To describe the steps of the proposed algorithm clearly, the detailed pseudocode of the HPSO-GSA is summarized in Figure 2. It is obvious that the HPSO-GSA procedure mainly follows the PSO algorithm with the DRC strategy, and the GSA is allowed to execute when the AESC and/or FIIC conditions are satisfied.
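Complementing the pseudocode in Figure 2, the following Python skeleton sketches the overall control flow under the assumption that the GSA stage runs whenever either the FIIC or the AESC trigger fires. The helper names (pso_step, gsa_step, fiic_triggered, AESC) refer to the earlier sketches and are not the paper's own identifiers.

```python
import numpy as np

def hpso_gsa(objective, lb, ub, N=60, D=30, T_max=500, Tf=20):
    """Skeleton of the sequential HPSO-GSA main loop (an illustrative sketch only).

    objective maps an array of shape (D,) to a scalar fitness (minimization).
    Relies on pso_step, gsa_step, fiic_triggered, and AESC from the earlier sketches.
    """
    X = lb + np.random.rand(N, D) * (ub - lb)
    V = np.zeros((N, D))
    fitness = np.apply_along_axis(objective, 1, X)
    pbest, pbest_fit = X.copy(), fitness.copy()
    g = fitness.argmin()
    gbest, gbest_fit = X[g].copy(), fitness[g]
    aesc = AESC(radius=1e-2, threshold=4)

    for t in range(T_max):
        # PSO stage (equations (1)-(2); the DRC strategy would replace the r1, r2 draw)
        X, V = pso_step(X, V, pbest, gbest, t, T_max, lb, ub)
        fitness = np.apply_along_axis(objective, 1, X)

        # GSA stage, entered only when one of the triggers fires (sequential hybridization)
        fiic = fiic_triggered(t, Tf)
        stagnated = aesc.triggered(gbest_fit)
        if fiic or stagnated:
            X, V = gsa_step(X, V, fitness, t, T_max)
            X = np.clip(X, lb, ub)
            fitness = np.apply_along_axis(objective, 1, X)

        # Update the personal and global best records
        improved = fitness < pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fitness[improved]
        g = fitness.argmin()
        if fitness[g] < gbest_fit:
            gbest, gbest_fit = X[g].copy(), fitness[g]

    return gbest, gbest_fit
```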

4. Experimental Setup, Results, and Discussion

In this section, the experimental studies that investigate the performance of the proposed HPSO-GSA on classical benchmark test functions are presented. We first describe the benchmark test functions in Section 4.1. Second, the experimental design, including the parameter settings of the algorithms involved in the comparison, is described in Section 4.2. The parameter sensitivity analysis of Tf and the stagnation cycle threshold and the effect of the different strategies in the HPSO-GSA are discussed in Sections 4.3 and 4.4, respectively. Finally, the performance of the proposed algorithm is investigated and compared with the PSO, the GSA, and other hybrid variants.

4.1. Benchmark Test Functions

The classical benchmark test functions with different complexities of the fitness landscape are shown in Table 2. These functions were also considered in the studies [15, 41]. The table gives brief descriptions of the function expressions, their feasible domains, their optimal positions, and their global minima. All the functions are minimization problems. The variable D denotes the dimension of the test functions. These functions, grouped into three sets, were designed to evaluate various aspects of the algorithms. The first set, f1(x) to f3(x), consists of unimodal test functions. The second set, f4(x) to f9(x), consists of multimodal high-dimensional test functions with many local optima; moreover, f8(x) and f9(x) are hybrid composition test functions. The third set, given by f10(x), consists of multimodal test functions with fixed dimensions.

4.2. Parameter Settings of the Involved Algorithms

In this section, we employ PSO [26], GSA [8], and five hybrid variants, namely, DEPSO [31], GAPSO [28], DE-GSA [37], GA-GSA [18], and PSOGSA [22], for comparison with the HPSO-GSA. These algorithms have a few parameters, some of which are common and others of which are specific to the individual algorithms.

Common parameters are the number of dimensions of the search space, the maximum number of iterations, the population size, and the total number of trials. For all test functions except the 2D Schaffer function, the experiments are conducted with 30 dimensions, that is, D = 30. To ensure a fair assessment between the HPSO-GSA and its peers, all involved algorithms are run independently 100 times on the employed test functions. The population size is set to 60. The maximum number of function evaluations FEmax is 5000 for the Schaffer function and 30,000 for the remaining functions. The evolution stagnation radius is chosen as 1.0e − 2. The convergence criterion for the test functions in 30 dimensions is fixed at 1.0e − 3. Each run stops when the maximum number of iterations or the convergence criterion is reached. The population is initialized with positions and velocities randomly selected from the corresponding feasible range; the upper and lower bounds of the particle's position are limited to the feasible interval of each problem, and the maximum velocity is restricted accordingly. Note that all the experiments are conducted in a Windows XP Professional OS environment on an Intel Core i5 at 2.67 GHz with 2 GB RAM, and the codes are implemented in Matlab 7.0.1.

Besides the common parameters, the parameter configurations for all the employed variants are taken from the optimized suggestions in the corresponding publications and are described in Table 3. For our HPSO-GSA, it is important to note that the evolution stagnation cycle threshold is selected as 4 and the fixed iteration interval cycle Tf is set to 20 according to the parameter sensitivity analysis.

4.3. Parameter Sensitivity Analysis

The key parameters Tf and the evolution stagnation cycle threshold have a direct effect on the performance of the HPSO-GSA. Hence, experiments are performed to investigate the effect of different parameter values on the proposed HPSO-GSA. For simplicity, the maximum number of iterations is set to 1000, and the other parameters are the same as previously mentioned. Four well-known test functions, namely, the Sphere, Rosenbrock, Rastrigin, and Griewank problems, have been employed to observe how the algorithm is affected by these parameters.

First, the parameter Tf of the proposed HPSO-GSA is tuned. Each test function has been tested on the HPSO-GSA with different values of Tf; however, it is impossible to evaluate all possible values of the parameter. Hence, eight selected values, Tf = 2, 5, 10, 15, 20, 30, 40, and 50, are considered. The results are averaged over 100 independent runs, and the normalized average best fitness values achieved for each value of Tf are shown in Figure 3. Note that the normalized average best fitness $\bar{F}_{i,f}$ is calculated as follows:

$$\bar{F}_{i,f} = \frac{F_{i,f} - F_{\min,f}}{F_{\max,f} - F_{\min,f}}$$

where i represents the group index of the different parameter values Tf (i = 1, 2, …, 8) and f denotes the Sphere, Rosenbrock, Rastrigin, or Griewank function. $F_{i,f}$ is the average best fitness under the ith value of Tf for function f, and $F_{\min,f}$ and $F_{\max,f}$ denote the minimum and maximum average best fitness over all the cases for function f, respectively. The aim of normalization is to reduce the influence of the different magnitudes in the same coordinate system. From Figure 3, it is apparent that the searching capability of the HPSO-GSA is influenced by the parameter Tf. An interesting conclusion is that too small or too large values of Tf tend to compromise the convergence accuracy of the HPSO-GSA. The best results are obtained when Tf is fixed at 20 for most of the test functions. Hence, in our simulations, the HPSO-GSA uses the parameter Tf = 20.
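For reference, the normalization above is a per-function min-max scaling; a short sketch (function name is mine) follows.

```python
import numpy as np

def normalize_fitness(avg_best):
    """Min-max normalize average best fitness values across parameter settings.

    avg_best: array of shape (num_settings,) for one test function.
    """
    lo, hi = avg_best.min(), avg_best.max()
    return (avg_best - lo) / (hi - lo + 1e-30)   # guard against identical values
```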

Second, the evolution stagnation cycle threshold is considered in a similar way. Ten possible scenarios are tested, with the parameter selected from [1, 10] in steps of 1. The results obtained by the HPSO-GSA for the test functions are given in Figure 4. It is clear that the Rosenbrock function attains its best average best fitness at a different threshold value, whereas for the other functions the best results are obtained when the threshold is set to 4. Hence, a threshold of 4 is a desirable choice for the proposed algorithm in the following experiments.

4.4. Effectiveness of Different Strategies

The proposed algorithm employs three strategies, namely, DRC, FIIC, and AESC. To evaluate the impact of each of these strategies on the performance improvement, we investigate the performance of (1) HPSO-GSA without the DRC strategy and with the FIIC strategy (HPSO-GSA1), (2) HPSO-GSA without the DRC strategy and with the AESC strategy (HPSO-GSA2), (3) HPSO-GSA with the DRC strategy and the FIIC strategy (HPSO-GSA3), (4) HPSO-GSA with the DRC strategy and the AESC strategy (HPSO-GSA4), and (5) the complete HPSO-GSA, in which all three strategies are integrated. In this experiment, seven test functions (excluding the Schaffer function) in 10 dimensions are considered. The maximum number of iterations is set to 500, and a total of 30 runs are conducted for each algorithm.

To ensure a fair comparison, we compare the average best fitness values $fit_{average}$ obtained by HPSO-GSA1, HPSO-GSA2, HPSO-GSA3, HPSO-GSA4, and HPSO-GSA with those obtained by the PSO. The comparison results are expressed in terms of the percentage improvement (%improve) computed as follows [36]:

$$\%improve = \frac{fit_{average}(\mathrm{PSO}) - fit_{average}(A)}{fit_{average}(\mathrm{PSO})} \times 100\%$$

where A represents one of the HPSO-GSA variants. If A has better performance (that is, a smaller average fitness) than PSO, %improve is positive; otherwise, it is negative. The results of $fit_{average}$ and %improve for all involved algorithms are given in Table 4.

From Table 4, all of HPSO-GSA variants have obtained performance improvement in comparison with the PSO, implying that the utilization of any strategies, namely, DRC, FIIC, and AESC, can contribute to improving the PSOs searching accuracy. Among all the HPSO-GSA variants, the HPSO-GSA shows best performance according to the largest average %improve, followed by the HPSO-GSA4, HPSO-GSA3, HPSO-GSA2, and HPSO-GSA1. This conclusion suggests that the integration of three strategies into the HPSO-GSA has significantly improved its optimizing performance. Hence, we use the HPSO-GSA for comparative study in the following section.

As shown in Table 4, HPSO-GSA3 and HPSO-GSA4 obtain higher average %improve than HPSO-GSA1 and HPSO-GSA2, especially in multimodal functions. This implies that the DRC strategy helps to escape from the local optima. On the other hand, HPSO-GSA2 and HPSO-GSA4 show better results against HPSO-GSA1 and HPSO-GSA3, respectively. This observation suggests that our AESC strategy performs better than the FIIC strategy. Finally, we observe some performance deteriorations in the functions fSchw and fRas for HPSO-GSA2 and HPSO-GSA3, as the %improve value of HPSO-GSA3 is lower than that of HPSO-GSA2. It indicates that the AESC strategy can effectively improve the searching accuracy in certain functions in the absence of having the DRC strategy. The AESC strategy is more likely to help the HPSO-GSA to enhance the performance in some cases.

4.5. Comparative Study

To assess the performance of the HPSO-GSA compared to seven other optimization algorithms, simulation experiments based on different measures are presented. These measures allow the algorithms to be evaluated from different perspectives. First, the performance results of the HPSO-GSA compared to PSO, GSA, and state-of-the-art hybrid variants, namely, DEPSO, GAPSO, DE-GSA, GA-GSA, and PSOGSA, are presented in Section 4.5.1. Following the experiments, statistical comparisons among the involved algorithms are given to determine whether the improvements of the proposed method are significant, as shown in Section 4.5.2.

4.5.1. Performance Results

The simulation results for each test function are recorded in Tables 5 and 6 based on three evaluation criteria, namely, accuracy, reliability, and efficiency, by means of the mean best fitness (mean), standard deviation (std. dev.), success rate (SR), and searching time (ST). In these tables, mean is defined as the average of the best fitness values generated by each algorithm for each function over 100 independent trials. A low mean is desirable, as it indicates that the algorithm has better optimizing accuracy. Std. dev. measures the amount of variation from the average; a small std. dev. indicates that an algorithm has good stability. SR represents the consistency of an algorithm in achieving the predefined convergence level within the maximum iterations. A larger SR implies that an algorithm is more reliable, as it can consistently solve a problem at the predefined accuracy level. Finally, the computational cost is evaluated by the mean ST, which represents the algorithm's convergence speed to the predefined solution accuracy. The best results obtained by the algorithms are shown in bold for each metric.
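As a sketch of how the four reported metrics can be computed from repeated runs (the function and variable names are mine, and the SR/ST definitions below are my reading of the text rather than the paper's exact script):

```python
import numpy as np

def summarize_runs(best_fitness, times_to_converge, accuracy=1e-3):
    """Compute mean, std. dev., SR, and ST from repeated independent runs.

    best_fitness: array of the best fitness found in each run.
    times_to_converge: array of wall-clock times (seconds) for runs that reached
        the accuracy level, np.nan for runs that did not.
    """
    mean = best_fitness.mean()
    std_dev = best_fitness.std()
    success = best_fitness <= accuracy
    sr = success.mean()                       # fraction of successful trials
    st = np.nanmean(np.where(success, times_to_converge, np.nan)) if success.any() else np.inf
    return mean, std_dev, sr, st
```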

From Table 5, we observe that the HPSO-GSA attains the best optimizing accuracy (that is, the lowest mean values) on all test functions except the quartic noise and Schwefel functions. Specifically, among the first three unimodal functions, the HPSO-GSA does not show superior searching accuracy on the quartic noise function, as the smallest mean values are obtained by GA-GSA. The HPSO-GSA significantly outperforms the other algorithms on the Rosenbrock function. This implies that the proposed algorithm provides better exploration ability for avoiding premature convergence, as the global optimum of this function is located in a narrow, long, parabolic-shaped flat valley; the function is therefore often used to evaluate the ability of an algorithm to mitigate the stagnation problem. For the great majority of the multimodal functions, including the complex hybrid composition functions, the HPSO-GSA surpasses all the other contenders, as it attains the smallest mean values. Hence, the HPSO-GSA can provide an appropriate level of global search for escaping from many local optima.

Meanwhile, we present the SR and ST results produced by all the involved algorithms in Table 6 in order to compare the algorithms' reliability and computational cost, respectively. First, we observe that the HPSO-GSA and DE-GSA have superior searching reliability over their peers on all the problems, as they converge successfully to the acceptable accuracy level with a success rate of 1. It should be mentioned that the PSO never converges to the criterion at the predefined accuracy level on the test functions. The remaining algorithms, especially DEPSO, GAPSO, GA-GSA, and PSOGSA, are only able to solve the test functions partially. Consequently, the proposed algorithm provides an appropriate balance between exploration and exploitation, guaranteeing convergence towards the predefined criterion. Second, as for the searching times, it is clear from Table 6 that the involved algorithms show various ST values for each test function. For instance, the computational load of the PSO is the lowest, as it has the advantage of fast convergence with a simple implementation and global search guidance, whereas the second lowest ST values are achieved by the HPSO-GSA for all the employed functions. This is because the GSA operator is invoked only when the FIIC or AESC condition is met, so the additional computational load it introduces remains small. The other algorithms, namely, GSA, DEPSO, GAPSO, DE-GSA, GA-GSA, and PSOGSA, require higher computational times compared to PSO and HPSO-GSA. The excellent performance of the HPSO-GSA in terms of accuracy and success rate also confirms that the HPSO-GSA is more computationally efficient than the other hybrid variants.

To further evaluate the algorithms' convergence speed qualitatively, the convergence curves of all algorithms for all test functions are presented in Figure 5. The evolution tendency of an algorithm reflects its convergence behavior and convergence speed throughout the iterations. From Figure 5, the rapid convergence of the HPSO-GSA is reflected by the convergence curves for all functions except the quartic noise and Schwefel functions; that is, the HPSO-GSA requires less computational time to converge to an acceptable level in fewer iterations. To be specific, for the Sphere function, PSO and DEPSO do not reach the specified convergence level even if the number of function evaluations is extended. For the Rosenbrock function, all the algorithms except PSO converge to the criterion, and the best results are obtained by the HPSO-GSA. As for the multimodal functions, DE-GSA and HPSO-GSA show similar convergence curves for the Rastrigin function; however, the HPSO-GSA achieves better optimization accuracy. Meanwhile, GSA, DEPSO, DE-GSA, GA-GSA, and HPSO-GSA can continuously improve on the Ackley function throughout the function evaluations, though only with a slower convergence speed; this suggests that the convergence accuracy may improve further if the evolutionary process is continued. It is worth mentioning that more than half of the hybrid variants, including GAPSO, DE-GSA, GA-GSA, PSOGSA, and HPSO-GSA, exhibit better convergence characteristics as the evolutionary process is extended, converging to the predefined level in the early or middle stages of the optimization. This implies that the incorporation of different algorithms can enhance the optimization ability of the original algorithm, as it provides more mechanisms to mitigate the stagnation problem. Generally, the HPSO-GSA provides more competitive performance in terms of global accuracy and success rate compared to the other hybrid variants.

4.5.2. Statistical Comparisons of Different Algorithms

To thoroughly compare the HPSO-GSA with its competitors, we perform a two-tailed t-test [42] with 58 degrees of freedom at a 0.05 level of significance and the Wilcoxon test [43]. The t-test results (T) achieved by all the involved algorithms based on the mean best fitness and the success rate are reported in Tables 7 and 8, respectively. Additionally, we rank (R) the algorithms from the smallest mean value to the largest one for each function; algorithms that are not statistically different from each other are given the same rank. The corresponding ranking R values are also listed in Table 7. To summarize the overall performance, the T values comparing HPSO-GSA with its peers are aggregated as "+/=/−" in the last row of the table, which denotes the number of test functions on which HPSO-GSA performs significantly better than, almost the same as, and considerably worse than its competitors, respectively. Meanwhile, the average ranks over all the test functions and the order of the average ranks are also listed in Table 7.
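As a sketch of how such pairwise comparisons can be reproduced (assuming equal-length samples of per-run results for two algorithms on one function; the 0.05 significance level follows the paper, while the function name and return structure are mine):

```python
from scipy import stats

def compare_algorithms(results_a, results_b, alpha=0.05):
    """Two-tailed t-test and Wilcoxon signed-rank test between two sets of run results.

    results_a, results_b: per-run best fitness values of two algorithms on one function
    (equal length). A sketch of the statistical protocol, not the paper's exact script.
    """
    t_stat, t_p = stats.ttest_ind(results_a, results_b, equal_var=True)
    w_stat, w_p = stats.wilcoxon(results_a, results_b)
    return {
        "t_significant": t_p < alpha,            # means differ significantly
        "wilcoxon_significant": w_p < alpha,
        "t_p_value": t_p,
        "wilcoxon_p_value": w_p,
    }
```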

From Table 7, we observe that the number of test functions where HPSO-GSA performs considerably better than its contenders (T = "+") is much larger than the number of test functions where HPSO-GSA obtains significantly worse results than its peers (T = "−"). Thus, the best searching accuracy of HPSO-GSA among the 8 algorithms is further validated by the t-test results. In particular, the HPSO-GSA significantly surpasses all of its peers on the Sphere, Rosenbrock, Rastrigin, Ackley, Griewank, Penalized1, Penalized2, and Schaffer functions. Moreover, the t-test results indicate that HPSO-GSA is statistically different from all the compared algorithms, including PSO, GSA, DEPSO, and GAPSO. Also, based on the average ranks, the algorithms are sorted into the following order: HPSO-GSA, PSOGSA, DE-GSA, GA-GSA, GAPSO, DEPSO, GSA, and PSO. Because the total and average ranks of HPSO-GSA are smaller than those of the other algorithms, HPSO-GSA obtains better overall performance than all the other algorithms. Moreover, the analysis indicates that HPSO-GSA (with both the personal thinking part of the PSO and the sequential mode) performs better than PSOGSA (without the personal thinking part of the PSO).

It is apparent from Table 8 that the number of test functions where HPSO-GSA performs considerably better than or almost the same as its contenders (T = "+" and "=") is much larger than the number of test functions where HPSO-GSA obtains significantly worse results than its peers (T = "−"). This reveals that HPSO-GSA is statistically different from GAPSO, DE-GSA, GA-GSA, and PSOGSA. If the convergence level were tightened in the experiments, the number of "+" values in this table would increase and HPSO-GSA would exhibit even better performance compared to the other algorithms.

To compare the performance difference between HPSO-GSA and the seven other algorithms, we also conduct a Wilcoxon signed-rank test. Table 9 shows the resulting p values when comparing HPSO-GSA with the other algorithms; values below 0.05 are shown in bold. The results show that HPSO-GSA is significantly better than the other algorithms except for DE-GSA and PSOGSA. However, HPSO-GSA still outperforms DE-GSA and PSOGSA according to the searching accuracy in Table 5 and the average ranks in Table 7.

Based on the aforementioned performance evaluation and statistical results, we conclude that the proposed hybrid algorithm performs better overall in the involved test functions compared to PSO, GSA, DEPSO, GAPSO, DE-GSA, GA-GSA, and PSOGSA.

5. Conclusion

In this paper, a novel hybridization of PSO and GSA in a sequential pattern, namely, HPSO-GSA, which consists of three learning strategies, dependent random coefficients, fixed iteration interval cycle, and adaptive evolution stagnation cycle, is proposed to solve global optimization problems. The employment of the dependent random coefficients enhances the balance between global and local search, as it improves the diversity of the swarm. The fixed iteration interval cycle and adaptive evolution stagnation cycle strategies are proposed for seamlessly integrating PSO and GSA at a low computational cost, thereby enhancing the algorithm's convergence speed. Meanwhile, the GSA operators encourage exploration and thus alleviate the premature stagnation problem. Experimental studies were performed to assess the performance of the proposed HPSO-GSA on benchmark test functions, as well as the impact of each employed strategy on the performance of the algorithm. The results indicate that the HPSO-GSA achieves better performance than the contenders investigated in this paper in terms of searching accuracy, algorithm reliability, and computational cost. Hence, the HPSO-GSA is a promising alternative for solving optimization problems. Possible future work includes extending the application of the HPSO-GSA to real-world optimization problems. In addition, we will investigate suitable hybridization strategies to further alleviate the stagnation tendency of the HPSO-GSA.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61572238, Anhui Provincial Natural Science Foundation under Grant nos. 1608085QF157 and 1708085ME132, and Key Project of Natural Science Research of Anhui Provincial Department of Education under Grant no. KJ2016A431.