Computational Intelligence and Neuroscience


Research Article | Open Access


Ying-Hui Jia, Jun Qiu, Zhuang-Zhuang Ma, Fang-Fang Li, "A Novel Crow Swarm Optimization Algorithm (CSO) Coupling Particle Swarm Optimization (PSO) and Crow Search Algorithm (CSA)", Computational Intelligence and Neuroscience, vol. 2021, Article ID 6686826, 14 pages, 2021. https://doi.org/10.1155/2021/6686826

A Novel Crow Swarm Optimization Algorithm (CSO) Coupling Particle Swarm Optimization (PSO) and Crow Search Algorithm (CSA)

Academic Editor: José Alfredo Hernández-Pérez
Received: 09 Dec 2020
Revised: 09 Apr 2021
Accepted: 08 May 2021
Published: 22 May 2021

Abstract

The balance between exploitation and exploration essentially determines the performance of a population-based optimization algorithm, which is also a big challenge in algorithm design. Particle swarm optimization (PSO) has a strong ability in exploitation but is relatively weak in exploration, while the crow search algorithm (CSA) is characterized by simplicity and more randomness. This study proposes a new crow swarm optimization algorithm coupling PSO and CSA, which provides the individuals the possibility of exploring unknown regions under the guidance of another random individual. The proposed CSO algorithm is tested on several benchmark functions, including both unimodal and multimodal problems with different variable dimensions. The performance of the proposed CSO is evaluated by the optimization efficiency, the global search ability, and the robustness to parameter settings, all of which are improved to a great extent compared with both PSO and CSA, as the proposed CSO combines the advantages of PSO in exploitation and those of CSA in exploration, especially for complex high-dimensional problems.

1. Introduction

Bio-inspired optimization algorithms have become increasingly popular over the past decade due to their simplicity of implementation, robustness, and suitability for parallel computation [1]. Although the specific principles and procedures of bio-inspired optimization algorithms vary, it is a consensus that an effective search technique must strike a balance between exploring new regions in the search space and exploiting known promising regions [2, 3].

Particle swarm optimization (PSO), developed by Kennedy [4] in 1995, is one of the most popular bio-inspired algorithms, with wide applications in industrial design [5], energy distribution [6], economic dispatch [7], and so on. PSO is inspired by the social behavior of bird flocking and fish schooling. A swarm of particles moves inside a bounded search space and cooperates to identify the best solution under the guidance of social attraction and cognitive attraction, which exploit known information and control the cooperation within the swarm. Meanwhile, the exploration of the search space is governed by an inertia factor weighting the movement of particles and the magnitude of their velocities. Nevertheless, similar to other bio-inspired algorithms, PSO suffers from premature convergence and entrapment in local optima when solving complex multimodal problems [8, 9], i.e., the algorithm has difficulty exploring all regions, and some of the peaks are easily missed. Various improvements to PSO have been made to balance exploration and exploitation.

Crow search algorithm (CSA), proposed by Askarzadeh [10] in 2016, is a newly developed algorithm inspired by the strategic behavior of crows in searching for food, thievery, and chasing. CSA is characterized by easy implementation, few parameters, and relatively strong exploitation capacity in the searching process [11]. However, CSA also suffers from low search precision, a high possibility of getting into local optima, and premature convergence, especially for multidimensional optimization problems [11], which may result from two important features of the basic CSA [12, 13]: (1) there is no criterion for choosing the destination, and the selection is made randomly among all crows; (2) the flight length is a constant value, which may cause inappropriate searching by the crows in the solution space and result in trapping in local optima. In the past few years, most research has focused on applying the algorithm to different scientific fields with suitable modifications. For feature selection, Sayed et al. [3] combined chaos with CSA to enhance the performance and convergence speed, while Chaudhuri and Sahu [14] proposed a time-varying flight length to obtain better results. For energy optimization, Makhdoom and Askarzadeh [15] incorporated an adaptive chaotic awareness probability into CSA to optimize the operation of a photovoltaic/diesel hybrid energy system. To overcome unbalanced exploration and exploitation phases, Shekhawat and Saxena [13] improved the algorithm with a cosine function and an opposition-based learning concept. CSA variants have been categorized into modified versions [16–18] and hybrid versions [19–21].

Both PSO and CSA are population-based techniques. For optimization problems, the most critical part is to find as many local optimal solutions as possible and gradually move towards the global best. Therefore, the key is to give the solutions some degree of freedom during iteration while keeping the optimization process efficient, i.e., to balance exploration and exploitation. It can be inferred from previous studies that PSO is more capable of controlling exploitation through its social and cognitive factors, but its exploration of unknown regions is highly influenced by the current best solutions in spite of the random numbers affecting the acceleration coefficients. In contrast, an individual in CSA moves completely randomly when it is aware of being tracked, in order to get rid of the stalker; hence, CSA performs better at exploring new regions of the search space. A few studies have proposed hybrid models combining PSO and CSA. Babu et al. [22] proposed a model that concatenates the update procedure of CSA with that of PSO, i.e., every particle updates its position twice per iteration. Crow Search Algorithm Auto-Drive PSO [23] uses CSA as the outer algorithm to optimize the sizing of renewable distributed generations; PSO is then applied to the optimal power flow of the power distribution system as the inner optimization. Both approaches execute the position updating procedures of PSO and CSA during each iteration: in [22], both are applied to the same problem, while Farh et al. [23] divide a problem into two parts that are optimized by CSA and PSO, respectively. However, these algorithms are applied to specific situations without a detailed evaluation of their general performance. To take advantage of the ability of CSA in exploration and that of PSO in exploitation, this paper proposes a new Crow Swarm Optimization algorithm (CSO).
Besides moving towards the current best particle in the swarm and the best position it has autonomously found so far, each individual also has a probability of stalking the best-ever solution of another individual. The proposed CSO essentially adds the information-sharing mechanism of CSA and explores the unknown region between two individuals using PSO. Numerical benchmark problems are tested using the proposed algorithm, and the results show that it balances exploitation and exploration better with only a few parameters.

The rest of the paper is organized as follows. Section 2 introduces the proposed CSO algorithm as well as the traditional PSO and CSA. Section 3 presents the performance evaluation on several benchmark functions, with an example of application to a real-world problem. Section 4 draws the conclusions.

2. Crow Swarm Optimization Algorithm (CSO)

2.1. Standard PSO

PSO imitates the foraging behavior of bird flocks and consists of a collection of agents, so-called particles. The position of each particle represents a solution, and each particle retains the information of its current position, velocity, and the personal best position it has found within the search space. The particles move to new positions with a velocity vector, which is iteratively updated based on the attraction towards the best position found by the particle itself and the best position found by any particle in the swarm, i.e., the position of particle i at the t-th iteration is determined by the position vector x_i^t and the velocity vector v_i^t. The updating formulas for particle velocity and position are shown in equations (1) and (2):

v_i^{t+1} = w v_i^t + c_1 r_1 (pbest_i^t - x_i^t) + c_2 r_2 (gbest^t - x_i^t),  (1)
x_i^{t+1} = x_i^t + v_i^{t+1},  (2)

where i = 1, 2, …, N, N is the number of particles in the population, w is the inertia weight coefficient, c_1 and c_2 are the acceleration coefficients, indicating the cognition degree of the particle towards the individual and the society, respectively, r_1 and r_2 are random numbers in [0, 1], pbest_i^t represents the current optimal position found by particle i, gbest^t represents the best position of the current population, and v_min and v_max are the lower and upper limits of the particle update velocity, respectively; the parameter values used in this paper are given in Section 3.1. x_min and x_max are the minimum and maximum positions of particles, respectively.
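As a concrete illustration, the update in equations (1) and (2) can be sketched in a few lines of Python (a minimal sketch, not the authors' implementation; the inertia weight w = 0.7 and the velocity limit of 2 are illustrative assumptions):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=2.0):
    """One PSO iteration over the whole swarm, following equations (1) and (2).
    x, v, pbest: (n, d) arrays; gbest: (d,) array."""
    n, d = x.shape
    r1 = np.random.rand(n, d)
    r2 = np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -v_max, v_max)   # enforce [v_min, v_max]
    return x + v_new, v_new                 # new positions and velocities
```

After each such step, pbest and gbest would be refreshed from the new fitness values before the next iteration.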

The key idea of PSO is that, under the dominant action of better particles, each particle moves close to the global optimal position. Although the random numbers r_1 and r_2 prevent the movements of the particles from being completely decided by the particle's own best position and the best position of the current population, these two attractors still have a great impact on the movement of particles. Thus, the initial positions of the particles strongly affect the search process, and large parts of the unknown regions receive hardly any attention. Such a characteristic provides a strong capacity for exploiting known information, but a weaker ability to explore new regions, which leads to premature convergence, i.e., the optimization easily converges to a local optimum.

2.2. Standard CSA

CSA is a new search algorithm inspired by the behavior of crows hiding and stealing food. It is nongreedy and can increase the variety of generated solutions. The principles of CSA are as follows: (1) crows live in flocks, (2) each crow can memorize the position of its hiding place and steal food from other crows, and (3) with a certain probability, a crow can be aware of being stalked and then protect its food from being stolen by flying randomly. The position of each crow represents a solution x_i. During the iteration, crow i tracks a random crow j. If crow j is not aware of being tracked (i.e., r_j ≥ AP_j), crow i approaches the hiding place m_j of crow j, while if crow j knows that crow i is chasing it, crow j fools crow i by going to a random location in the search space to protect its hiding place from detection. The mathematical expression is

x_i^{t+1} = x_i^t + r_i × fl × (m_j^t - x_i^t),  if r_j ≥ AP_j,
x_i^{t+1} = a random position in the search space,  otherwise,  (3)

where r_i and r_j are random numbers with uniform distribution between 0 and 1, fl is the length of a crow's flight, AP_j is the perceptual (awareness) probability of crow j, and m_j is the location where crow j stores food, which is equivalent to the historical optimal solution crow j has found.
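The behavior described by equation (3) can be sketched as follows (an illustrative Python sketch assuming simple box bounds lb/ub; not the authors' code):

```python
import numpy as np

def csa_step(x, mem, fl=2.0, ap=0.2, lb=-100.0, ub=100.0):
    """One CSA iteration: crow i follows a random crow j toward its
    cache mem[j]; if crow j is 'aware' (probability ap), crow i is
    fooled and flies to a random position instead."""
    n, d = x.shape
    x_new = np.empty_like(x)
    for i in range(n):
        j = np.random.randint(n)            # randomly chosen crow to follow
        if np.random.rand() >= ap:          # crow j unaware: approach its cache
            x_new[i] = x[i] + np.random.rand() * fl * (mem[j] - x[i])
        else:                               # crow j aware: random relocation
            x_new[i] = lb + np.random.rand(d) * (ub - lb)
    return np.clip(x_new, lb, ub)
```

Note the two features criticized in the text: the followed crow j is chosen with no selection criterion, and fl stays constant throughout the search.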

CSA is a population-based optimization algorithm that is fairly simple, with only two adjustable parameters (flight length fl and awareness probability AP), which in turn makes it very attractive for different engineering applications. In CSA, the awareness probability AP directly controls the diversity of the algorithm. Compared with the genetic algorithm (GA), PSO, and the harmony search algorithm (HS), CSA has fewer parameters to adjust and is easier to implement. Moreover, an individual in CSA has the possibility of reaching a totally random position, and thus it has a stronger ability to explore new unknown regions. However, CSA lacks a criterion for choosing the destination, and the selection is made randomly among the crows; besides, the flight length fl is a constant value. Such characteristics of CSA lead to a weaker ability to exploit current information compared with PSO, resulting in low search precision, a high possibility of getting into local optima, and premature convergence, especially for multidimensional optimization problems.

2.3. Proposed CSO

To take advantage of both PSO in exploitation and CSA in exploration, this study proposes a new crow swarm optimization algorithm (CSO). The movement of particles in CSO is influenced by the best position found so far by the particle itself and the best position identified so far by the whole swarm, as in PSO. Meanwhile, the particles keep an eye on each other, i.e., there exists a possibility that the movement of a particle is determined by the best position identified by the whole swarm and the best solution found so far by another particle; as in CSA, it tracks the position where another crow hides its food. The formula updating the flying velocity of crows in CSO is

v_i^{t+1} = w v_i^t + c_2 r_2 (gbest^t - x_i^t) + c_3 r_3 (m_j^t - x_i^t),  if r ≥ AP,
v_i^{t+1} = w v_i^t + c_1 r_1 (pbest_i^t - x_i^t) + c_2 r_2 (gbest^t - x_i^t),  otherwise,  (4)

where AP represents the degree of the influence of individual j on individual i and r is a random number within [0, 1]. The updating velocity in equation (4) is also limited by v_min and v_max.

When r ≥ AP, individual i decides to track individual j, and the velocity of individual i is affected by its inertia velocity, the global optimal solution, and the current optimal solution of individual j. Otherwise, the velocity of individual i is composed of its inertia velocity, the global optimal solution, and the local optimal solution of individual i itself. Once the velocity of individual i is obtained, the position of individual i in the next iteration is calculated by equation (2).

Figure 1 shows the flowchart of the proposed CSO. It can be seen from Figure 1 that the difference between the proposed CSO and the other two algorithms mainly lies in the method of updating the particle velocity and position. PSO pays more attention to optimization efficiency and aims to move closer to the current optimal solution in the iterative process, resulting in a strong ability to exploit currently known information, while CSA gives greater freedom to the algorithm to ensure the diversity of solutions, which produces a greater ability to explore unknown regions. The proposed CSO combines the advantages of both PSO and CSA to reach a better balance between increasing randomness and improving efficiency, i.e., between exploration and exploitation.

2.4. Individual Movement

Individual movement directly affects the performance of a swarm intelligence algorithm. Figure 2 shows the schematics of how an individual updates its position in (a) PSO, (b) CSA, and (c) CSO. The movement of the individuals in PSO is rather fixed, determined by inertia, the particle's current best position, and the best position identified by the whole swarm. Both CSA and CSO provide alternatives with a certain probability to better maintain the diversity of solutions. However, CSA is less efficient than CSO due to the divergence of the ways in which the solutions update. CSO preserves the optimization efficiency of PSO while keeping a possibility of exploring larger regions. In CSO, the larger the value of the probability parameter AP, the stronger the directional characteristic of the search. When AP = 1, CSO degenerates into standard PSO, while when AP = 0, the individuals in CSO always randomly select the historical optimal solution of another individual in the population to track, with less exploitation of other known information. It should be noted that when AP is equal to 0, CSO does not degenerate into CSA, but it retains the rich solution diversity characteristic of CSA.
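Putting equations (1) to (4) together, the overall CSO procedure can be sketched in Python as follows (a minimal illustration under the settings described in this paper, not the authors' code; the inertia weight w = 0.7 and the function and variable names are assumptions):

```python
import numpy as np

def cso_optimize(f, d, n=100, iters=1000, lb=-100.0, ub=100.0,
                 ap=0.2, w=0.7, c1=2.0, c2=2.0, c3=2.0, v_max=2.0):
    """Sketch of a CSO loop: with probability (1 - ap) an individual
    tracks the memorized best position of a random individual j
    (equation (4), first branch); otherwise it performs a standard
    PSO move (equation (4), second branch)."""
    x = lb + np.random.rand(n, d) * (ub - lb)   # initial positions
    v = np.zeros((n, d))
    mem = x.copy()                              # memorized best positions
    mem_val = np.array([f(p) for p in mem])
    g = mem[np.argmin(mem_val)].copy()          # global best position
    for _ in range(iters):
        for i in range(n):
            r1, r2, r3 = np.random.rand(3)
            if np.random.rand() >= ap:          # track a random individual j
                j = np.random.randint(n)
                v[i] = w * v[i] + c2 * r2 * (g - x[i]) + c3 * r3 * (mem[j] - x[i])
            else:                               # standard PSO update
                v[i] = w * v[i] + c1 * r1 * (mem[i] - x[i]) + c2 * r2 * (g - x[i])
        v = np.clip(v, -v_max, v_max)           # velocity limits
        x = np.clip(x + v, lb, ub)              # equation (2) + position bounds
        fx = np.array([f(p) for p in x])
        better = fx < mem_val                   # update individual memories
        mem[better], mem_val[better] = x[better], fx[better]
        g = mem[np.argmin(mem_val)].copy()
    return g, float(mem_val.min())
```

Setting ap=1.0 makes every move a PSO move, matching the degeneration to standard PSO described above.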

3. Experiments and Discussion

3.1. Standard Benchmark Functions

Extensive experiments are conducted on a set of well-known benchmark functions to ascertain the performance of the proposed CSO, including the global optimization problems shown in Table 1. The experimental settings are as follows: the maximum iteration number is 1000, and the swarm size is set to 100; the parameters in equations (1) to (4) are set to c_1 = c_2 = c_3 = 2, AP = 0.2, fl = 2, and v_max = [2]^D.


Benchmark function | Dimension | Upper bound | Lower bound | Optimum

Sphere | 10 | 100 | -100 | 0
Schwefel's problem 1.2 | 10 | 100 | -100 | 0
Rosenbrock | 10 | 2 | -2 | 0
Rastrigin's | 10 | 5.12 | -5.12 | 0
Ackley | 10 | 32 | -32 | 0
Michalewicz | 10 | pi | 0 | -9.66015
Branin | 2 | 15 | -5 | 0.397887
Griewank | 10 | 600 | -600 | 0
Normalised paraboloid | 10 | 100 | — | —
Step | 10 | 100 | -100 | 0
Six-hump camel back | 2 | 5 | -5 | -1.031628
CEC2011 problem 1: parameter estimation for frequency-modulated (FM) sound waves [24] | 6 | 6.35 | -6.4 | 0
CEC2011 problem 2: Lennard-Jones potential problem [24] | 15 | [4, 4, pi, …] | [0, 0, 0, …] | —

In Table 1, D indicates the dimension of the problem and x_i represents the value of the decision vector in dimension i.

These test functions include both unimodal functions (such as Sphere, Schwefel's problem 1.2, Step, and Rosenbrock) and multimodal functions (such as Ackley and Griewank). Unimodal functions, with one global optimum and no local optima, are used to investigate the exploitation level and convergence rate of the algorithms, while multimodal benchmark functions, with multiple local optima, are used to test the ability of the algorithms to avoid entrapment in local optima and explore new unknown regions.
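For reference, the two categories can be illustrated with compact implementations of two benchmarks from Table 1 (standard definitions; a sketch, with Sphere as the unimodal case and Rastrigin's as the multimodal case):

```python
import numpy as np

def sphere(x):
    """Unimodal: single global minimum f(0) = 0, no local optima."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: a grid of local minima; global minimum f(0) = 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```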

In addition, a parameter η representing the improvement efficiency is introduced to evaluate the progress of CSO compared with standard PSO or CSA:

η = (F_base - F_CSO) / F_base,  (5)

where F_base denotes the optimum obtained by the baseline algorithm (PSO or CSA) and F_CSO denotes the optimum obtained by CSO.
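Assuming the improvement efficiency is the relative reduction of the baseline optimum achieved by CSO (a form consistent with the values of 0.96 and 0.26 reported in Section 3.2; the function name is hypothetical), it can be computed as:

```python
def improvement_efficiency(f_baseline, f_cso):
    """Relative improvement of CSO over a baseline (PSO or CSA) optimum.
    Assumed form: eta = (F_base - F_CSO) / F_base."""
    return (f_baseline - f_cso) / f_baseline
```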

3.2. Optimization Efficiency

Table 2 shows the optimization results of the algorithms for the different target functions. In addition to the three methods described above, the well-accepted bio-inspired optimization algorithms MBO and MS are also examined to better illustrate the performance of CSO. Each test conducts 25 trials to eliminate the influence of the initial population and enhance the reliability of the optimization results. On ten out of thirteen test functions, CSO finds the best optimum values among the compared algorithms. However, the performance of CSO on Rastrigin's function, the Michalewicz function, and the Griewank function is inferior to that of MS or MBO. For the Rosenbrock function and the CEC 2011 problem of parameter estimation for frequency-modulated (FM) sound waves, the best performance of CSO is extremely close to 0, which is never reached by either MS or MBO; however, how to maintain this potential in each run still needs further research. It can be seen from the data that, in most cases, CSO performs better than both PSO and CSA, with not only a smaller optimum value but also a smaller standard deviation. The results indicate that CSO has stronger optimization ability with higher stability. For some benchmark problems, the minimum value obtained by CSO even differs in order of magnitude from those of PSO and CSA, indicating that the optimization performance of CSO has been greatly improved. CSA performs better in specific problems where the feasible region is relatively small, such as CEC2011 problem 2. However, when the feasible range is on the order of hundreds, CSA's performance can degrade dramatically; although we tried to change the parameters of the algorithm, it was time-consuming and often difficult to obtain the desired results.


Function | Algorithm | Optimum | Mean | Standard deviation

Sphere | MBO | 2.64E-14 | 1.11E-10 | 2.23E-10
 | MS | 2.97E-21 | 1.12E-18 | 1.47E-18
 | CSA | 3.21E+01 | 1.63E+02 | 8.38E+01
 | PSO | 3.10E-14 | 8.72E-03 | 2.95E-02
 | CSO | 1.75E-24 | 4.27E-23 | 5.69E-23

Schwefel's | MBO | 1.62E-08 | 2.40E+03 | 2.75E+03
 | MS | 5.45E-20 | 1.91E-18 | 2.66E-18
 | CSA | 1.27E+03 | 2.27E+03 | 6.48E+02
 | PSO | 4.55E-05 | 9.96E-01 | 2.10E+00
 | CSO | 2.46E-24 | 9.28E-23 | 1.22E-22

Rosenbrock | MBO | 1.63E-02 | 1.64E+00 | 1.81E+00
 | MS | 3.28E+00 | 7.99E-01 | 1.04E+00
 | CSA | 3.09E-01 | 8.27E-01 | 4.20E-01
 | PSO | 1.98E-05 | 6.31E+00 | 1.39E+01
 | CSO | 2.50E-26 | 1.79E+01 | 2.77E+01

Ackley | MBO | 6.53E-08 | 1.23E-05 | 1.37E-05
 | MS | 1.16E-11 | 9.98E-10 | 1.16E-09
 | CSA | 3.50E+00 | 5.82E+00 | 1.29E+00
 | PSO | 4.04E-03 | 2.59E+00 | 1.29E+00
 | CSO | 7.39E-13 | 2.90E-12 | 2.23E-12

Rastrigin's | MBO | 2.82E-12 | 8.25E-01 | 1.48E+00
 | MS | 0.00E+00 | 1.42E-16 | 4.82E-16
 | CSA | 1.68E+01 | 2.86E+01 | 5.72E+00
 | PSO | 4.08E+00 | 1.84E+01 | 9.29E+00
 | CSO | 3.98E+00 | 1.46E+01 | 9.18E+00

Michalewicz | MBO | -9.38E+00 | -9.10E+00 | 1.41E-01
 | MS | -9.58E+00 | -8.69E+00 | 5.44E-01
 | CSA | -7.14E+00 | -6.31E+00 | 4.16E-01
 | PSO | -9.12E+00 | -7.74E+00 | 8.73E-01
 | CSO | -9.29E+00 | -7.64E+00 | 9.12E-01

Branin | MBO | 3.98E-01 | 4.02E-01 | 1.25E-02
 | MS | 3.98E-01 | 3.98E-01 | 1.11E-16
 | CSA | 3.98E-01 | 3.98E-01 | 1.91E-08
 | PSO | 3.98E-01 | 3.98E-01 | 1.11E-16
 | CSO | 3.98E-01 | 3.98E-01 | 1.11E-16

Griewank | MBO | 6.66E-15 | 2.61E+00 | 5.22E+00
 | MS | 0.00E+00 | 0.00E+00 | 0.00E+00
 | CSA | 1.37E+00 | 2.52E+00 | 7.30E-01
 | PSO | 6.16E-02 | 2.75E-01 | 1.47E-01
 | CSO | 2.96E-02 | 1.29E-01 | 1.03E-01

Normalised paraboloid | MBO | 3.23E-15 | 4.28E-11 | 7.50E-11
 | MS | 1.39E-14 | 9.55E-14 | 6.63E-14
 | CSA | 3.11E-04 | 2.00E-03 | 1.34E-03
 | PSO | 1.27E-18 | 3.00E-08 | 1.21E-07
 | CSO | 4.92E-30 | 6.13E-28 | 1.98E-27

Step | MBO | 0.00E+00 | 0.00E+00 | 0.00E+00
 | MS | 0.00E+00 | 0.00E+00 | 0.00E+00
 | CSA | 1.80E+01 | 1.72E+02 | 1.08E+02
 | PSO | 0.00E+00 | 3.56E+00 | 2.98E+00
 | CSO | 0.00E+00 | 0.00E+00 | 0.00E+00

Six-hump camel back | MBO | 1.17E-16 | 1.70E-13 | 4.04E-13
 | MS | 5.13E-29 | 4.00E-24 | 8.99E-24
 | CSA | -1.03E+00 | -1.03E+00 | 4.46E-05
 | PSO | -1.03E+00 | -1.03E+00 | 0.00E+00
 | CSO | -1.03E+00 | -1.03E+00 | 2.09E-07

CEC2011 problem 1 | MBO | 1.18E+01 | 1.95E+01 | 3.55E+00
 | MS | 1.67E-02 | 1.91E+01 | 6.20E+00
 | CSA | 1.60E+01 | 2.06E+01 | 2.53E+00
 | PSO | 8.42E+00 | 1.87E+01 | 4.82E+00
 | CSO | 1.27E-21 | 1.14E+01 | 8.68E+00

CEC2011 problem 2 | MBO | -8.97E+00 | -7.85E+00 | 8.96E-01
 | MS | -9.10E+00 | -8.66E+00 | 7.03E-01
 | CSA | -8.92E+00 | -8.06E+00 | 4.57E-01
 | PSO | -9.10E+00 | -5.81E+00 | 2.01E+00
 | CSO | -9.10E+00 | -6.24E+00 | 2.17E+00

Note. The bold fonts indicate the algorithm performs best for the corresponding problem.

Figure 3 shows the variation of the optimum with increasing variable dimension for (a) the unimodal sphere function and (b) the step function, as well as (c) the multimodal Rastrigin's function and (d) the Ackley function, with other conditions fixed. It can be found from Figure 3 that CSO is able to maintain a relatively high exploitation level for unimodal functions with high dimensions. To improve the quality of the solution for high-dimensional problems, the size of the population is usually enlarged, which leads to a great increase in calculation cost. Figure 3(a) indicates that, for a 10D problem, the optimum resulting from CSO still remains on the order of 10^-23, while those from CSA and PSO rise to 10^2 and 10^-3, respectively. Moreover, CSO also has the shortest error bars, which represent the stability of the algorithm. Figure 3 demonstrates that CSO performs superiorly for high-dimensional unimodal problems.

The improvement made by CSO on multimodal problems varies with the problem. For the Ackley function, Table 2 shows that the optimization efficiency of CSO is still significantly better than that of CSA and PSO. The three algorithms perform worst when solving Rastrigin's problem, in which case CSO is still the best among them; the improvement efficiency of CSO is as high as 0.96 and 0.26 compared with CSA and PSO, respectively. Figure 3(c) shows the optimization results of Rastrigin's function with different dimensions. It is clear that the optimization results degrade significantly as the dimension increases. Nevertheless, CSO shows the weakest degradation among the three algorithms. The same trend can also be observed in Figure 3(d).

3.3. Global Search Abilities

Figure 4 shows the evolution of the global optimum during the optimization process of the three algorithms. As suggested in Figure 4(a), PSO converges earliest in the iteration, and there is almost no improvement after 100 iterations. However, for CSA and CSO, improvements can still be observed even at the end of the iteration. This phenomenon indicates that PSO is prone to premature convergence, while CSA and CSO preserve the diversity of solutions to a certain extent during the optimization process, i.e., they have a stronger ability to explore new regions of the search space. It can also be seen from Figure 4(a) that although CSA can avoid being trapped in local optima, its ability to find a better solution is weak; in the later iterations, the solution obtained by CSO is smaller than that of CSA. To better investigate the optimization process, the concept of a turning point is proposed here: when the optimum of a certain generation improves on that of the previous generation by a certain amplitude (in this study, the fourth decimal place or above of the objective value), the current generation is defined as a turning point. Table 3 shows the statistics of turning points for the three algorithms over 25 trials. For Rastrigin's function, the average number of turning points of PSO is much higher than that of CSA and CSO, but most of them are concentrated in the first 100 iterations, and the change at each turn is small. As for CSA and CSO, the last turning point occurs after iteration 800, verifying that both CSA and CSO have a better ability to jump out of local optimal solutions, i.e., a better ability of exploration. Since CSA gives too much freedom to the algorithm, its number of turning points is the smallest, indicating low efficiency. CSO possesses a moderate number of turning points with a relatively large change per turn, which makes it the more balanced algorithm.
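The turning-point statistic defined above can be computed from a convergence history as follows (a sketch; the "fourth decimal place or above" criterion is encoded as a tolerance of 1e-4, an interpretation of the text):

```python
def turning_points(history, tol=1e-4):
    """Given the best objective value per generation (minimization),
    return (number of turning points, iteration of the last one).
    A turning point is a generation whose optimum improves on the
    previous generation by more than tol."""
    turns = [t for t in range(1, len(history))
             if history[t - 1] - history[t] > tol]
    return len(turns), (turns[-1] if turns else None)
```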


Function | Algorithm | Average number of turning points | Iteration of the last turning point | Averaged optimal value | Averaged value change per turn

Rastrigin's | PSO | 86 | 112 | 19.07 | 0.84
 | CSA | 16 | 823 | 28.92 | 6.58
 | CSO | 36 | 839 | 11.72 | 3.08

Ackley | PSO | 7 | 393 | 2.64 | 0.19
 | CSA | 32 | 922 | 6.18 | 0.46
 | CSO | 83 | 365 | 2.58E-12 | 0.20

Figure 4(b) shows the optimization process of the three algorithms for the Ackley function. For PSO, there are several trials in which the optimum is trapped around 2. The optimization process of CSA looks like a staircase, improving every several iterations; it can be inferred that a considerable number of additional iterations would still be needed to reach 0. The best results are obtained by CSO, for which the optima of all trials are quite close to 0. The calculated turning points in Table 3 show that, after about 365 iterations, CSO almost reaches the global optimum value. The purple line (CSO) is first sandwiched between the yellow line (PSO) and the blue line (CSA), and then it passes both lines and comes to the bottom of the diagram in both Figures 4(a) and 4(b). CSA introduces an antitracking mechanism so that more randomness is given to the algorithm, which greatly reduces the possibility of entrapment in a local optimum at the expense of efficiency [13]. PSO employs local and global optimal solutions to direct the optimization process, so it has higher optimization efficiency but also a higher probability of getting stuck. Based on the above results, it can be concluded that CSO absorbs the advantages of PSO and CSA while overcoming the shortcomings of both, so it is able to approach the global optimal solution quickly while maintaining a certain diversity of solutions.

3.4. Robustness to Parameter Settings

Table 4 shows the optimization results for two unimodal functions, indicating the impact of the setting of the parameter AP on algorithm performance with the other parameters unchanged. Given that AP ∈ [0, 1], when AP takes the values at the two endpoints, the optimization result is always the worst. For AP = 1, the result of CSO is close to that of PSO. For unimodal problems such as the sphere function, the performance of CSA deteriorates with increasing AP, while for Schwefel's problem 1.2, the results of CSA are unstable and show no obvious rule; thus, the tuning of AP is a big challenge.


Sphere function

AP | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1

CSO | 1.29E-18 | 2.44E-23 | 2.85E-31 | 8.62E-42 | 5.22E-54 | 4.52E-3
CSA | 122.51 | 202.91 | 318.31 | 408.18 | 881.13 | 3029.16
PSO | 1.42E-3 (AP not applicable)

Schwefel's problem 1.2 function

AP | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1

CSO | 1.44E-18 | 3.91E-23 | 7.77E-31 | 1.94E-41 | 5.88E-54 | 8.28E-01
CSA | 3452.62 | 2900.35 | 2986.59 | 2531.65 | 2660.23 | 3.23E+03
PSO | 1.74 (AP not applicable)

Figure 5 shows the impact of AP on solving more complex multimodal problems. It can be seen that the performance of CSO is not stable when AP is close to the endpoints. For the case AP = 1, CSO degenerates into PSO. For the CSA algorithm, when fl = 2, the optimal value of AP is concentrated between 0 and 0.2 [10]; if AP is too large, the optimization ability of CSA becomes quite weak. As AP gets closer to 1, CSO gradually approaches PSO, and the algorithm can easily be trapped in a local optimum. Both Table 4 and Figure 5 indicate that the value of AP should be kept away from the two endpoints for CSO.

The impact of AP on the performance of CSO and CSA is discussed above given that fl = 2. fl is another important parameter of the CSA algorithm, and the combination of these two parameters determines the optimization capability of CSA. Askarzadeh [10] mentioned that the parameters AP and fl play an essential role in the CSA search process, as the solution obtained with the right parameter values is far superior to the one obtained with improper values. For different optimization problems, it is necessary to choose among different parameter values to obtain a reasonable optimal solution.

Figure 6 compares the results obtained from CSA and CSO under different parameter combinations. From the previous discussion, a large AP greatly undermines the optimization ability of CSA; therefore, the upper bound of AP is set to 0.5. Askarzadeh [10] also mentioned that the best choice of fl may be around 2; thus, the range of fl is set to [1.25, 2.75]. Figure 6(a) shows the optimization results for the sphere function with dimension 6, and Figure 6(b) those for Rastrigin's function with dimension 4. The upper surface in each figure represents the result of the CSA algorithm, while the lower surface is the optimized result of CSO. As fl and v_max play a similar role in the optimization process, both restricting the updating velocity of the solution, the parameter v_max in CSO (assumed to take the same value in each dimension and thus treated as a scalar) is set equal to fl. As can be seen from Figure 6, the upper surface is always steeper than the lower surface, indicating that the performance of CSA depends heavily on the parameter settings. For the sphere function, of all parameter combinations applied in this study, the worst CSA result (1.61) is more than 10 times worse than the best (0.15); for CSO, however, all parameter combinations reach values extremely close to 0, producing an almost flat surface, i.e., CSO has stronger robustness to parameter settings than CSA. For the multimodal Rastrigin's function, the standard deviations over parameter combinations for CSA and CSO are 1.02 and 0.25, respectively, which also proves that CSO greatly reduces the dependency on parameters.

3.5. Application in Real-World Problem

Optimization algorithms are often used in the optimal operation of reservoirs. This section describes the ability of the CSO algorithm to solve a real-world problem: the short-term scheduling of a hydrothermal power system described in the CEC 2011 competition. The optimization goal is to meet the load demand and reduce the cost in accordance with the various constraints placed on the hydraulic systems and the power system networks. Hourly discharge management for four hydro units requires 96 variables, i.e., D = 96.

The objective function for this problem is to minimize the cost of the thermal power plant. The operation is subject to several limitations, including demand constraints, thermal generator constraints, hydro generator constraints, storage capacity constraints, and reservoir limit constraints. These constraints are added to the fitness function as penalty terms. The detailed description and data can be found in [24].
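The penalty formulation described above can be sketched generically (illustrative only; the weight value and function name are assumptions, not the competition's actual penalty coefficients):

```python
def penalized_fitness(cost, violations, weight=1.0e6):
    """Fitness = thermal cost + weighted sum of constraint violations.
    'violations' holds one value per constraint, positive when the
    constraint is violated; feasible solutions incur zero penalty."""
    penalty = sum(max(0.0, v) for v in violations)
    return cost + weight * penalty
```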

Each algorithm was run 25 times, and the average value of the optimal solution over the runs was calculated; the results are shown in Figure 7. From the figure, we can see that PSO performs badly, since several penalties are relatively large, leading to the worst total objective value. MBO occupies the second largest circle, indicating that its performance is only slightly better than that of PSO. As for CSA, the total objective value is small compared with PSO, but it violates the discharge requirements to a large extent. The best result comes from MS, which takes up the smallest area; CSO also meets almost every requirement with only a slight penalty. Table 5 shows the optimal and average values of the 25 runs. The best total objective value comes from CSO and MS, reduced by 57.6%, 70.9%, and 85.5% compared with CSA, PSO, and MBO, respectively. The optimized total cost does not differ greatly among the algorithms, so the penalty term is the decisive factor influencing the optimization results. Both CSO and MS make every penalty equal or close to 0. Thus, it is believed that CSO has the same strength as MS when tackling this problem, although it is sometimes trapped in local optima, which needs further improvement. In conclusion, the operation scheme obtained by CSO is a powerful alternative. Combined with the above analysis, CSO should also be strong in real-world problems, since it is both explorative and efficient.


Table 5: Optimal (minimum) and average (mean) values over 25 runs.

Algorithm   Total objective value     Total cost                Total penalty
            Minimum     Mean          Minimum     Mean          Minimum     Mean
CSA         2.22E+06    7.96E+06      9.42E+05    9.53E+05      1.28E+06    7.00E+06
PSO         3.23E+06    3.79E+07      9.46E+05    9.58E+05      2.26E+06    3.28E+07
CSO         9.41E+05    5.41E+06      9.41E+05    9.57E+05      18.9        4.46E+06
MBO         6.48E+06    2.10E+07      9.43E+05    9.58E+05      5.52E+06    2.01E+07
MS          9.41E+05    1.47E+06      9.41E+05    9.52E+05      6.11        5.21E+05

4. Conclusion

The balance between exploitation and exploration essentially determines the performance of a population-based optimization algorithm and remains a major challenge in algorithm design. PSO is a popular bio-inspired optimization algorithm with a strong ability to exploit existing information: every movement of the particles is guided by previous knowledge, which weakens the ability to explore. CSA is a recently proposed algorithm characterized by simplicity, few parameters, and more randomness. The probability of flying totally at random gives CSA a stronger ability to explore new regions in the search space, but also results in a large computational cost and low search precision. This study proposes a new crow swarm optimization algorithm coupling PSO and CSA, which gives the individuals the possibility of exploring unknown regions under the guidance of another random individual. The proposed CSO algorithm is tested on several benchmark functions, including both unimodal and multimodal problems with different variable dimensions; an example of its application to a hydrothermal scheduling problem is also presented. The performance of the proposed CSO, the standard PSO, and CSA is evaluated by the optimization efficiency, the global search ability, and the robustness to parameter settings. The results verify that the proposed CSO has better optimization ability than PSO and CSA: it is less likely to fall into a local optimum than PSO and more efficient than CSA, i.e., CSO approaches the global optimum quickly while maintaining a certain diversity of solutions during iteration. It can be concluded that the proposed CSO combines the advantages of PSO in exploitation with those of CSA in exploration. In addition, CSO is more robust to its parameters than CSA, whose parameter sensitivity was previously regarded as a major drawback.
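As a conceptual illustration of the coupling described above, the following sketch shows one possible per-individual update that blends a PSO velocity update with a CSA-style move toward a random individual's memory, plus a CSA-style awareness jump. The exact update rule, parameter values, and search bounds here are assumptions for illustration, not the authors' published formulation:

```python
import random

def cso_step(pos, vel, pbest, gbest, mem, rng,
             w=0.7, c1=1.5, c2=1.5, ap=0.1, fl=2.0, bounds=(-10.0, 10.0)):
    """One conceptual CSO move for a single individual (illustrative only).

    With probability ap the individual ignores all guidance and explores
    a random region (CSA awareness); otherwise it combines a PSO velocity
    update with a CSA-style flight toward a random individual's memory.
    """
    dim = len(pos)
    if rng.random() < ap:  # CSA-style random exploration
        return [rng.uniform(*bounds) for _ in range(dim)], vel
    j = rng.randrange(len(mem))  # pick a random individual's memory
    new_vel, new_pos = [], []
    for d in range(dim):
        v = (w * vel[d]
             + c1 * rng.random() * (pbest[d] - pos[d])   # PSO cognitive term
             + c2 * rng.random() * (gbest[d] - pos[d]))  # PSO social term
        x = pos[d] + v + fl * rng.random() * (mem[j][d] - pos[d])  # CSA flight
        new_vel.append(v)
        new_pos.append(x)
    return new_pos, new_vel

rng = random.Random(1)
mem = [[0.0, 0.0], [1.0, 1.0]]
pos, vel = cso_step([5.0, -5.0], [0.0, 0.0], [4.0, -4.0], [0.0, 0.0], mem, rng)
print(pos, vel)
```

The design intent is visible in the two branches: the guided branch exploits personal and global bests (PSO) while still being pulled toward another individual's memory (CSA), and the awareness branch preserves pure exploration.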
Although the proposed algorithm performs better than traditional PSO and CSA, it is sometimes inferior to other algorithms. The CSO in this paper is built on the original PSO, but it could instead be built on modified versions of PSO for possible further improvement. No single algorithm can beat all others, but CSO can be improved further, for example by setting adaptive parameters or introducing subswarms. Beyond the algorithms above, many other representative computational intelligence algorithms, such as Harris hawks optimization (HHO) [25], could be applied to these problems; they are not fully discussed in this paper. The source code of the proposed CSO algorithm is provided in this study, and further applications and tests of CSO are welcome.

Data Availability

The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by National Natural Science Foundation of China (Grant nos. 91847302, 51879137, and 51979276).

References

  1. J. Del Ser, E. Osaba, D. Molina et al., “Bio-inspired computation: where we stand and what’s next,” Swarm and Evolutionary Computation, vol. 48, pp. 220–250, 2019.
  2. K. R. Harrison, A. P. Engelbrecht, and B. M. Ombuki-Berman, “Optimal parameter regions and the time-dependence of control parameter values for the particle swarm optimization algorithm,” Swarm and Evolutionary Computation, vol. 41, pp. 20–35, 2018.
  3. G. I. Sayed, A. E. Hassanien, and A. T. Azar, “Feature selection via a novel chaotic crow search algorithm,” Neural Computing and Applications, vol. 31, no. 1, pp. 171–188, 2019.
  4. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, November 1995.
  5. T. Jiang, J. Li, and K. Huang, “Longitudinal parameter identification of a small unmanned aerial vehicle based on modified particle swarm optimization,” Chinese Journal of Aeronautics, vol. 28, no. 3, pp. 865–873, 2015.
  6. A. Khare and S. Rangnekar, “A review of particle swarm optimization and its applications in Solar Photovoltaic system,” Applied Soft Computing, vol. 13, no. 5, pp. 2997–3006, 2013.
  7. A. Mahor, V. Prasad, and S. Rangnekar, “Economic dispatch using particle swarm optimization: a review,” Renewable and Sustainable Energy Reviews, vol. 13, no. 8, pp. 2134–2141, 2009.
  8. J. Gou, Y.-X. Lei, W.-P. Guo, C. Wang, Y.-Q. Cai, and W. Luo, “A novel improved particle swarm optimization algorithm based on individual difference evolution,” Applied Soft Computing, vol. 57, pp. 468–481, 2017.
  9. M. He, Y. Hu, H. Chen et al., “Lifecycle coevolution framework for many evolutionary and swarm intelligence algorithms fusion in solving complex optimization problems,” Swarm and Evolutionary Computation, vol. 47, pp. 3–20, 2019.
  10. A. Askarzadeh, “A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm,” Computers & Structures, vol. 169, pp. 1–12, 2016.
  11. C. Qu and Y. Fu, “Crow search algorithm based on neighborhood search of non-inferior solution set,” IEEE Access, vol. 7, pp. 52871–52895, 2019.
  12. F. Mohammadi and H. Abdi, “A modified crow search algorithm (MCSA) for solving economic load dispatch problem,” Applied Soft Computing, vol. 71, pp. 51–65, 2018.
  13. S. Shekhawat and A. Saxena, “Development and applications of an intelligent crow search algorithm based on opposition based learning,” ISA Transactions, vol. 99, pp. 210–230, 2020.
  14. A. Chaudhuri and T. P. Sahu, “Feature selection using Binary Crow Search Algorithm with time varying flight length,” Expert Systems with Applications, vol. 168, Article ID 114288, 2020.
  15. S. Makhdoomi and A. Askarzadeh, “Optimizing operation of a photovoltaic/diesel generator hybrid energy system with pumped hydro storage by a modified crow search algorithm,” Journal of Energy Storage, vol. 27, Article ID 101040, 2019.
  16. D. Gupta, J. J. P. C. Rodrigues, S. Sundaram, A. Khanna, V. Korotaev, and V. H. C. de Albuquerque, “Usability feature extraction using modified crow search algorithm: a novel approach,” Neural Computing & Applications, vol. 32, no. 15, pp. 10915–10925, 2018.
  17. H. Wu, P. Wu, K. Xu, and F. Li, “Finite element model updating using crow search algorithm with Levy flight,” International Journal for Numerical Methods in Engineering, vol. 121, no. 13, pp. 2916–2928, 2020.
  18. A. Javidi, E. Salajegheh, and J. Salajegheh, “Enhanced crow search algorithm for optimum design of structures,” Applied Soft Computing, vol. 77, pp. 274–289, 2019.
  19. S. Arora, H. Singh, M. Sharma, S. Sharma, and P. Anand, “A new hybrid algorithm based on grey wolf optimization and crow search algorithm for unconstrained function optimization and feature selection,” IEEE Access, vol. 7, pp. 26343–26361, 2019.
  20. N. Mahesh and S. Vijayachitra, “DECSA: hybrid dolphin echolocation and crow search optimization for cluster-based energy-aware routing in WSN,” Neural Computing & Applications, vol. 31, no. S1, pp. 47–62, 2019.
  21. A. M. Anter and M. Ali, “Feature selection strategy based on hybrid crow search optimization algorithm integrated with chaos theory and fuzzy c-means algorithm for medical diagnosis problems,” Soft Computing, vol. 24, no. 3, pp. 1565–1584, 2020.
  22. N. R. Babu, L. C. Saikia, S. K. Bhagat, and A. Saha, “Maiden application of hybrid crow-search algorithm with particle swarm optimization in LFC studies,” in Proceedings of International Conference on Artificial Intelligence and Applications, vol. 1164, pp. 427–439, 2021.
  23. H. M. H. Farh, A. M. Al-Shaalan, A. M. Eltamaly, and A. A. Al-Shamma’A, “A novel crow search algorithm auto-drive PSO for optimal allocation and sizing of renewable distributed generation,” IEEE Access, vol. 8, pp. 27807–27820, 2020.
  24. S. Das and P. N. Suganthan, “Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems,” Technical Report, 2010.
  25. A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, and H. Chen, “Harris hawks optimization: algorithm and applications,” Future Generation Computer Systems, vol. 97, pp. 849–872, 2019.

Copyright © 2021 Ying-Hui Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
