Abstract

We propose a new optimization algorithm inspired by the formation and transformation of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. In the proposed algorithm, the cloud is assumed to have three states. The gaseous state represents global exploration, the liquid state represents the intermediate process from global exploration to local exploitation, and the solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore by the condensation operation. In the liquid state, cloud particles carry out macrolocal exploitation by the liquefaction operation. A new mutation strategy, called cloud differential mutation, is introduced to counter the premature convergence that the misleading effect of a nucleus may cause. In the solid state, cloud particles carry out microlocal exploitation by the solidification operation. The effectiveness of the algorithm is validated on different benchmark problems, and the results are compared with those of eight well-known optimization algorithms. The statistical analysis of the performance of the different algorithms on 10 benchmark functions and the CEC2013 problems indicates that CPDE attains good performance.

1. Introduction

Nowadays, many real-world problems can be treated as optimization problems. These problems are often complex because they are multimodal, high-dimensional, discontinuous, multiobjective, dynamic, and so forth. Traditional optimization methods often yield poor results on such problems. Therefore, intelligent optimization algorithms have been designed by imitating natural evolution. These algorithms, called artificial evolution (AE) [1], solve difficult design and optimization problems by building solutions that are increasingly fit with respect to the desired properties. Many results show that intelligent optimization algorithms are often superior to traditional optimization methods for solving practical optimization problems. Popular intelligent optimization algorithms include Particle Swarm Optimization (PSO) [2], differential evolution (DE) [3], the artificial bee colony algorithm (ABC) [4], artificial immune systems (AIS) [5], and the ant colony optimization algorithm (ACO) [6].

Particle Swarm Optimization (PSO) is a bionic intelligent computing method proposed by Kennedy and Eberhart that imitates the flying and foraging behavior of birds. The PSO concept is based on the movement of particles and their personal and global best experiences [2]. PSO has many advantages, such as fast convergence speed, high-quality solutions, and good robustness in multidimensional function optimization, especially for engineering applications. In PSO, the particles adjust their velocity and position dynamically by sharing the information and experience of the best particles. This information sharing strategy reduces the diversity of the swarm, because all particles follow only the optimal particle's information while ignoring that of other particles. Therefore, the algorithm is prone to premature convergence, low search accuracy, and low efficiency in later iterations when solving complex problems. Researchers have studied and proposed many improvement strategies to address these problems. Clerc and Kennedy proposed PSO with a constriction factor (PSOcf) [7] by studying a particle's trajectory as it moves in discrete time. In PSOcf, a five-dimensional depiction is developed for controlling the explosion that results from randomness in the system; coefficients are applied to various parts of the update formula to guarantee convergence while encouraging exploration. Mendes et al. then proposed the fully informed Particle Swarm Optimization variants FIPS and PSOwFIPS [8]. FIPS is the fully informed particle swarm in which all neighbor contributions have the same value, whereas in PSOwFIPS the contribution of each neighbor is weighted by the goodness of its previous best. Parsopoulos and Vrahatis proposed Unified Particle Swarm Optimization (UPSO) [9] based on PSO.
UPSO merges the local and global variants of PSO to improve their exploration and exploitation abilities without imposing additional requirements in terms of function evaluations. By employing multiple swarms to optimize different components of the solution vector cooperatively, a cooperative approach to Particle Swarm Optimization, named the cooperative particle swarm optimizer (CPSO), is proposed in [10]. H-PSO [11] is a hierarchical version of the PSO metaheuristic; the algorithm uses a dynamic hierarchy to define the neighborhood structure. To deal with different complex situations, the self-learning particle swarm optimizer (SLPSO) is proposed in [12]. In SLPSO, each particle has a set of four strategies whose cooperation enables a particle to choose the optimal strategy according to its own local fitness landscape. A PSO with an aging leader and challengers (ALC-PSO) [13] is proposed to overcome premature convergence without significantly impairing the fast-converging feature of PSO; the idea is to assign the leader of the swarm a growing age and a lifespan and to allow the other individuals to challenge the leadership when the leader becomes aged. By improving the existing teaching-learning-based optimization (TLBO) framework, a teaching and peer-learning PSO (TPLPSO) and a bidirectional teaching and peer-learning PSO (BTPLPSO) are proposed in [14, 15]. BTPLPSO uses two learning phases, namely, the teaching and peer-learning phases, to address problems such as particles failing to improve their fitness during the search process. Recently, Jordehi [16] proposed enhanced leader PSO (ELPSO) based on a five-staged successive mutation strategy. The comprehensive learning particle swarm optimizer (CLPSO) [17] uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity.
Dynamic Multiswarm Particle Swarm Optimization (DMS-PSO) [18] is based on the local version of PSO. The whole population is divided into a large number of subswarms, each of which uses its own particles to search for better solutions.

Differential evolution (DE), based on individual differences and introduced by Storn and Price, is a simple evolutionary algorithm [3]. It is an effective, robust, and simple global optimization algorithm with only a few control parameters [19]. In DE, an intermediate population is produced by recombining the differences between individuals of the current population; the algorithm then uses the parent and offspring populations to guide the optimization process. The biggest difference between DE and other evolutionary algorithms is the mutation operation. At the beginning of the evolution, the differences between individuals are large, so the mutation steps are large and the global search ability is relatively strong. Later in the evolution, the differences between individuals shrink and the local search ability improves. According to experimental studies and theoretical analyses, the control parameter settings are important for the performance of DE; inappropriate control parameters may lead to premature convergence or stagnation in the evolutionary process. Therefore, many researchers have studied the choice of control parameters and mutation strategies to improve the performance of DE. Brest et al. proposed self-adapting control parameters in differential evolution (jDE) [20]; the parameter control technique of jDE is based on the self-adaptation of the two parameters F and CR during the evolutionary process. Zhang and Sanderson [21] proposed adaptive differential evolution with an optional external archive (JADE). In JADE, a set of recently explored inferior solutions is archived in addition to the best solutions previously explored; these solutions are used to indicate the promising direction and to improve the population diversity. Moreover, a new greedy mutation strategy, "DE/current-to-pbest" with an optional external archive, is introduced.
In addition, a crossover rate repair technique has been incorporated into the self-adaptive DE (SaDE) [22] and into an ensemble of mutation strategies and control parameters with DE (EPSDE) [23]. Recently, researchers have made a number of further improvements to DE. A binary learning differential evolution (BLDE) algorithm [24], inspired by the learning mechanism in PSO algorithms, can efficiently locate the global optimal solutions by learning from the last population. Inspired by the phenomenon of fireworks explosion, an improved fireworks optimization method is proposed in [25]. An enhanced self-adaptive differential evolution (ESADE) [26] is proposed to relieve users from setting the control parameters of DE anew for every different problem. A self-adaptive DE algorithm with discrete mutation control parameters (DMPSADE) [27] automatically adjusts the control parameters during the entire evolution process. An adaptive hybrid DE algorithm (AH-DEa) is proposed in [28]; it employs binomial crossover in the early stages of evolution for exploration and exponential crossover for exploitation in the later stages. The adaptive invasion-based migration model for distributed DE (AIM-dDE) [29] is a novel adaptive model endowed with three updating schemes for randomly setting the DE control parameters. A self-adaptive DE algorithm based on the Gaussian probability distribution, the gamma distribution, and a chaotic sequence (DEGC) [30] is proposed, where the control parameters are self-adaptive. A differential evolution algorithm with self-adaptive multicombination strategies (SAS-DE), which combines the strengths of four mutation operators, two crossover operators, and two constraint handling techniques, is proposed in [31].

The differential mutation operator can adaptively adjust the search step according to the landscape of the objective function and the convergence state when solving simple problems. However, simple differential mutation is not good at guiding the population evolution on complex problems. Therefore, many parameter control mechanisms have been proposed in the aforementioned references to improve the exploration and exploitation abilities of the algorithm. Inspired by the formation and transformation of clouds in nature, we introduce a new algorithm, namely, Cloud Particles Differential Evolution (CPDE). The essence of CPDE is to provide a phase transformation mechanism that promotes the nucleus (the leader) to lead the swarm through the evolution. An individual of the population corresponds to a cloud particle, and the best solution found so far acts as a nucleus. We use the nucleus to generate a new population with a cloud generator; all the individuals then gather around the nucleus, providing a more promising search direction toward the global optimum. However, the misleading effect of a nucleus positioned at a local optimum may cause premature convergence. To resolve this issue, we introduce a new mutation strategy called cloud differential mutation, which improves the population diversity so that premature convergence can be alleviated.

The rest of the paper is organized as follows. Section 2 describes the basic procedure of differential evolution. Section 3 states the basic idea and principle of CPDE in detail. Simulation results comparing CPDE with eight other evolutionary algorithms are presented and discussed in Section 4. Finally, conclusions and ideas for future research are given in Section 5.

2. Classical Differential Evolution

DE is a simple but effective population-based stochastic search technique for solving global optimization problems. Mathematically, an optimization problem (OP) can be formulated as follows:

$$\min f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \ldots, x_D), \quad x_j \in \bigl[x_j^{\min}, x_j^{\max}\bigr], \; j = 1, 2, \ldots, D, \tag{1}$$

where $D$ is the number of decision variables and $x_j^{\min}$ and $x_j^{\max}$ denote the minimum and maximum bounds of $x_j$, respectively.

2.1. Initialization of the Population

DE begins with a randomly initiated population of $NP$ $D$-dimensional real-valued parameter vectors $\mathbf{x}_{i,0} = (x_{1,i,0}, x_{2,i,0}, \ldots, x_{D,i,0})$, $i = 1, 2, \ldots, NP$. The initial value of the $j$th component of the $i$th vector is produced by

$$x_{j,i,0} = x_j^{\min} + \operatorname{rand}(0,1) \cdot \bigl(x_j^{\max} - x_j^{\min}\bigr), \tag{2}$$

where $NP$ denotes the population size and $\operatorname{rand}(0,1)$ represents a uniformly distributed random variable within the range $[0,1]$.

2.2. Mutation

After initialization, DE employs the mutation operation and arithmetic recombination to generate a mutant vector $\mathbf{v}_{i,G}$ with respect to each individual $\mathbf{x}_{i,G}$, the so-called target vector, in the current population at generation $G$. There are many mutation strategies available in the literature [21, 32, 33]; the classical one is "DE/rand/1":

$$\mathbf{v}_{i,G} = \mathbf{x}_{r_1,G} + F \cdot \bigl(\mathbf{x}_{r_2,G} - \mathbf{x}_{r_3,G}\bigr). \tag{3}$$

The indices $r_1$, $r_2$, and $r_3$ are mutually exclusive integers randomly generated within the range $[1, NP]$, which are also different from the index $i$. The scaling factor $F$ is a positive control parameter for scaling the difference vector.

2.3. Crossover

After mutation, a crossover operation is applied to increase the potential diversity of the population. A trial vector $\mathbf{u}_{i,G}$ is produced through the crossover operation. In the basic version, DE employs the binomial (uniform) crossover defined as follows:

$$u_{j,i,G} = \begin{cases} v_{j,i,G}, & \text{if } \operatorname{rand}_j(0,1) \le CR \text{ or } j = j_{\mathrm{rand}}, \\ x_{j,i,G}, & \text{otherwise.} \end{cases} \tag{4}$$

In (4), the crossover rate $CR$ is a user-specified constant and appears as a control parameter of DE. $\operatorname{rand}_j(0,1)$ is a uniformly distributed random number, and $j_{\mathrm{rand}} \in \{1, 2, \ldots, D\}$ is a randomly chosen index. The binomial crossover is performed on each of the $D$ variables. If $\operatorname{rand}_j(0,1) \le CR$ or $j = j_{\mathrm{rand}}$, the crossover operator copies the $j$th component of the mutant vector $\mathbf{v}_{i,G}$ to the corresponding element of the trial vector $\mathbf{u}_{i,G}$; otherwise, it is copied from the corresponding target vector $\mathbf{x}_{i,G}$. The condition $j = j_{\mathrm{rand}}$ ensures that the trial vector differs from the target vector in at least one component.

2.4. Selection

DE adopts a greedy selection to choose the population for the next generation. The selection operation can be expressed as follows:

$$\mathbf{x}_{i,G+1} = \begin{cases} \mathbf{u}_{i,G}, & \text{if } f(\mathbf{u}_{i,G}) \le f(\mathbf{x}_{i,G}), \\ \mathbf{x}_{i,G}, & \text{otherwise.} \end{cases} \tag{5}$$

In summary, DE comprises four steps: initialization, mutation, crossover, and selection.
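The four steps can be sketched as a short program. The following is a minimal illustration of classic DE/rand/1 with binomial crossover and greedy selection; the function name and default parameter values are ours, chosen for illustration rather than taken from the paper.

```python
import numpy as np

def de_rand_1_bin(f, bounds, NP=50, F=0.5, CR=0.9, max_gen=200, seed=0):
    """Minimal classic DE: initialization, DE/rand/1 mutation,
    binomial crossover, and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    D = lo.size
    # Initialization: NP uniformly random D-dimensional vectors (eq. (2)).
    pop = lo + rng.random((NP, D)) * (hi - lo)
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(max_gen):
        for i in range(NP):
            # Mutation: three mutually exclusive indices, all different from i.
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                    size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial crossover with a guaranteed j_rand component (eq. (4)).
            j_rand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # Greedy selection (eq. (5)).
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]
```

For example, minimizing the 10-dimensional sphere function $f(\mathbf{x}) = \sum_j x_j^2$ with this sketch drives the best fitness close to zero within a few hundred generations.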

3. Cloud Particles Differential Evolution Algorithm

The classic DE has two control parameters, F and CR, that need to be tuned by the user. Well-chosen control parameter values help to increase the population diversity and avoid trapping in a local minimum. However, these parameters are problem-dependent, so a trial-and-error process is required to find appropriate values for each specific problem, which may incur a huge computational cost. Motivated by these observations, we develop the CPDE algorithm, in which the populations evolve toward the optimum according to their previous experience of generating promising solutions.

3.1. Cloud Formation

The idea of the proposed CPDE is inspired by cloud formation in nature. The changing process of the cloud particles' states [34] is shown in Figure 1.

When the water vapor is saturated, it adsorbs onto cloud condensation nuclei (CCN), and cloud particles are produced. The cloud particles condense and sublimate continuously by absorbing the water vapor around the cloud. In the process of condensation, the cloud particles move closer together and collide to form bigger ones, called large cloud particles. A large number of supercooled water droplets are then generated from the changes of the large cloud particles. Ice crystals begin to grow as the supercooled water droplets evaporate. When the ice crystals become big enough, they overcome the air resistance and buoyancy and fall to the ground; thus, snow or hailstones are formed. The snow or hailstones on the ground are irradiated by sunshine and become water vapor in the air again.

3.2. Description of the Proposed Algorithm

There are various kinds of substances in nature, and substances commonly exist in three states: gaseous, liquid, and solid. For example, Figure 2 is a simple diagram showing how water transforms among the three states; liquefaction, solidification, and sublimation are the three ways in which water transforms. To describe the different forms of a substance, the phase can be used as a symbol for its state. A phase is defined as a homogeneous portion of a material that has its own physical and chemical characteristics and properties [35]. A phase transition is the transformation of a thermodynamic system from one phase or state of matter to another.

Based on a careful study of cloud changes in nature and the three-state changes of water, the optimization process of the cloud particles is divided into three stages: the cloud gaseous phase, the cloud liquid phase, and the cloud solid phase. The cloud gaseous phase corresponds to the initial state of the cloud particles, the cloud liquid phase corresponds to the supercooled water droplets in nature, and the cloud solid phase corresponds to the hailstones in nature.

The proposed algorithm, like other metaheuristic algorithms, begins with an initial population of cloud particles. At the beginning, the initial state is the cloud gaseous phase, which is composed of many cloud particles (shown in Figure 3). The best individual (best cloud particle) is chosen as a nucleus. The cloud particles then undergo condensational or collisional growth to form larger cloud particles (shown in Figure 4). If the population evolves successfully, the cloud particles change from the gaseous phase to the liquid phase by liquefaction; this process is called phase transformation. In the liquid phase, the cloud particles gradually shift from global exploration to local exploitation, improving the local exploitation ability; here they carry out macrolocal exploitation. When the optimal solution area is found, the particles change from the liquid phase to the solid phase by phase transformation. In the solid phase (shown in Figure 5), the algorithm accelerates convergence to find the optimal solution; the cloud particles perform microlocal exploitation. Here, the hailstone represents the optimal solution.

The optimization process of CPDE model is shown in Figure 6.

In CPDE, the initialization, condensation operation, liquefaction operation, cloud differential mutation, and solidification operation are described as follows.

(1) Initialization. When metaheuristic methods are used to solve an optimization problem, a population is formed first, and the variables involved in the optimization problem are represented as an array. In this algorithm, a single solution is called a "cloud particle." The cloud particles have three parameters [36]: expectation ($Ex$), entropy ($En$), and hyperentropy ($He$). $Ex$ is the expectation value of the distribution of all cloud particles in the domain. $En$ is the range of the domain that can be accepted by linguistic values (qualitative concept); in other words, $En$ measures ambiguity. $He$ is the dispersion degree of the entropy ($En$), that is, the entropy of the entropy; it can also be regarded as the uncertainty measure of the entropy. In $D$-dimensional optimization problems, a cloud particle is a $1 \times D$ array, defined as follows:

$$\text{cloud particle} = [x_1, x_2, \ldots, x_D]. \tag{6}$$

To start the optimization algorithm, an $N \times D$ matrix of cloud particles (i.e., the population of cloud particles) is generated by the cloud generator. Cloud particle generation proceeds as follows. Firstly, a set of values $En'$ is drawn from a normal distribution with expectation $En$ and variance $He^2$. Secondly, a normal distribution is used again to generate the cloud particles, with expectation $Ex$ and variance $En'^2$.

The cloud particles generated by the cloud generator are given as follows (rows and columns correspond to the population size and the number of design variables, resp.):

$$\text{population} = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,D} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,D} \\ \vdots & \vdots & & \vdots \\ x_{N,1} & x_{N,2} & \cdots & x_{N,D} \end{bmatrix}. \tag{7}$$

Each decision variable value can represent a real value for continuous or discrete problems. The fitness of a cloud particle corresponds to its objective function value. In (7), $N$ and $D$ stand for the number of cloud particles and the number of decision variables, respectively. The $N$ cloud particles are generated by the cloud generator, and the cloud particle with the optimal value is selected as the nucleus. The total population consists of several subpopulations, and each subpopulation of cloud particles has its own nucleus; therefore, there are as many nuclei as subpopulations.
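The two-stage generation procedure described above can be sketched as follows. This is a minimal normal cloud generator in the $Ex$/$En$/$He$ convention of the cloud model [36]; the function name and the use of the absolute value of $En'$ (to keep the normal scale nonnegative) are our assumptions.

```python
import numpy as np

def cloud_generator(Ex, En, He, n_particles, rng=None):
    """Normal cloud generator: first draw En' ~ N(En, He^2) per particle
    and dimension, then draw each particle component x ~ N(Ex, En'^2)."""
    rng = np.random.default_rng() if rng is None else rng
    Ex = np.atleast_1d(np.asarray(Ex, float))  # expectation, one value per dimension
    # Stage 1: entropy samples whose spread is itself uncertain (hyperentropy He).
    En_prime = rng.normal(En, He, size=(n_particles, Ex.size))
    # Stage 2: cloud particles around Ex; |En'| keeps the scale nonnegative.
    return rng.normal(Ex, np.abs(En_prime))
```

Each row of the returned matrix is one cloud particle; drawing $En'$ first and then sampling around $Ex$ gives a cloud of points whose spread itself is uncertain, which is what distinguishes this generator from a plain normal sampler.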

(2) Condensation Operation. The population realizes the global exploration in the cloud gaseous phase by the condensation operation. Here, we use differential evolution theory: the algorithm generates a new cloud particle $\mathbf{v}_{i,G}$ for each $\mathbf{x}_{i,G}$ in the population by adding the weighted difference between two randomly selected cloud particles to a third one:

$$\mathbf{v}_{i,G} = \mathbf{x}_{r_1,G} + F \cdot \bigl(\mathbf{x}_{r_2,G} - \mathbf{x}_{r_3,G}\bigr), \tag{8}$$

where the indices $r_1$, $r_2$, and $r_3$ are mutually exclusive integers randomly chosen from the range $[1, N]$.

Then, we obtain a trial vector $\mathbf{u}_{i,G}$ by mixing the components of $\mathbf{x}_{i,G}$ and $\mathbf{v}_{i,G}$:

$$u_{j,i,G} = \begin{cases} v_{j,i,G}, & \text{if } \operatorname{rand}_j(0,1) \le CR \text{ or } j = j_{\mathrm{rand}}, \\ x_{j,i,G}, & \text{otherwise,} \end{cases} \tag{9}$$

where $D$ is the dimension of the decision variables and $j = 1, 2, \ldots, D$.

Our choices of $F$ and $CR$ are based on the values suggested by Storn in [3, 37] and on other values proposed in the literature [3, 37, 38].

Finally, $\mathbf{x}_{i,G}$ is replaced by $\mathbf{u}_{i,G}$ if the trial vector is better, according to the selection rule (5).

(3) Liquefaction Operation. The liquefaction operation is defined by (10)–(12). Here, $Ex_i$ represents the best solution of the $i$th subpopulation, $En_i$ is the entropy of the $i$th subpopulation, and $He_i$ is the hyperentropy of the $i$th subpopulation; lf is the liquefaction factor, and $N$ stands for the population size. The remaining parameters of (10)–(12) are a threshold and the number of subpopulations.

We use the cloud generator to produce the cloud particles with $Ex_i$, $En_i$, and $He_i$.

In liquid phase, the individual is updated by cloud differential mutation.

(4) Cloud Differential Mutation. The cloud differential mutation is defined by (16), in which the scaling coefficient is a uniformly distributed random number and the two difference terms are cloud particles generated by the cloud generator around the nucleus, with the parameters computed according to (13)–(15).
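Since the exact form of (16) is not reproduced here, the following is only a speculative sketch of the idea described in the text: the mutant is built from the nucleus plus the scaled difference of two cloud particles drawn around it. The function name, the combination formula, and the role of the random coefficient are our assumptions, not the paper's exact equation.

```python
import numpy as np

def cloud_differential_mutation(nucleus, En, He, F=0.5, rng=None):
    """Assumed form of cloud differential mutation: perturb the nucleus by
    the scaled difference of two cloud particles drawn around it
    (our reconstruction, not the paper's exact equation (16))."""
    rng = np.random.default_rng() if rng is None else rng
    nucleus = np.asarray(nucleus, float)
    # Two entropy samples (one per particle), kept nonnegative for use as a scale.
    En_prime = np.abs(rng.normal(En, He, size=(2, nucleus.size)))
    # Two cloud particles generated around the nucleus.
    c1, c2 = rng.normal(nucleus, En_prime)
    r = rng.random()  # uniformly distributed random coefficient
    return nucleus + F * r * (c1 - c2)
```

Because the difference is taken between two particles centered on the nucleus, the perturbation is zero-mean, which keeps the search anchored near the leader while still injecting diversity.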

(5) Solidification Operation. The algorithm realizes the convergence operation in the cloud solid phase by the solidification operation, defined by (17) and (18), where the solidification factor controls the contraction and the remaining parameter denotes the number of subpopulations.

The pseudocode of CPDE is illustrated in Algorithms 1–4. $N$ stands for the population size, Childnum represents the number of individuals in each subpopulation, Runmax represents the number of total runs, NFES represents the number of function evaluations, and MaxFES represents the maximum number of function evaluations. The search space is the bounded domain defined for each problem.

(1)if State == 0 then                 % Condensation Operation
(2) for to
(3)     Generate according to (8)
(4)     Vector is generated according to (4) and (9)
(5)     Select the population () for the next generation
(6) end for
(7) if NFES == Runmax × MaxFES/N then  % Phase Transformation from gaseous to liquid
(8)     for to            % Liquefaction Operation
(9)    Calculate the for each subpopulation according to (10)
(10)      Calculate for each subpopulation according to (11)
(11)       Calculate for each subpopulation according to (12)
(12)   end for
(13)   Use cloud generator to produce cloud particles
(14)   State = 1;
(15)   end if
(16) end if

(1)if State == 1 then
(2)  for to             % Cloud Differential Mutation
(3)   Calculate the nucleus according to (13)
(4)   Calculate according to (14)
(5)   Calculate according to (15)
(6)   Generate and with cloud generator
(7)   Generate according to (16)
(8)   Vector is generated according to (4) and (9)
(9)   Select the population () for the next generation
(10) end for
(11) if NFES == Runmax × MaxFES × 0.025 then % Phase Transformation from liquid to solid
(12)   
(13)   Archive =
(14)   
(15)   State = 2
(16) end if
(17) end if

(1)if State == 2 then
(2)    if then        % Solidification Operation
(3)   for to
(4)      Calculate the for each subpopulation according to (10)
(5)      Calculate for each subpopulation according to (17)
(6)      Calculate for each subpopulation according to (18)
(7)   end for
(8)   Use cloud generator to produce cloud particles
(9)   Evaluate Population
(10)  Archive =
(11)  
(12)  else
(13)     = Archive
(14)    for to             % Cloud Differential Mutation
(15)    Calculate the nucleus according to (13)
(16)    Calculate according to (14)
(17)    Calculate according to (15)
(18)    Generate and with cloud generator
(19)    Generate according to (16)
(20)    Vector is generated according to (4) and (9)
(21)    Select the population () for the next generation
(22)    end for
(23)  end if
(24) end if

(1)Initialize $D$ (number of dimensions), $N$ (population size); Childnum = ;
(2); ; ; ; ;
(3)for :
(4) for :
(5)  if is even
(6)   
(7)  else
(8)   
(9)  end if
(10)   end for
(11)  end for
(12) Use cloud generator to produce cloud particles
(13)  State = 0
(14) while requirements are not satisfied do
(15)   Evolutionary algorithm of the cloud gaseous state
(16)   Evolutionary algorithm of the cloud liquid state
(17)   Evolutionary algorithm of the cloud solid state
(18) end while
 State = 0: Cloud Gaseous Phase, State = 1: Cloud Liquid Phase, State = 2: Cloud Solid Phase

4. Experiments and Discussions

In this section, the performance of the proposed CPDE is tested on a comprehensive set of benchmark functions, which includes 10 different global optimization problems and the CEC2013 benchmark problems. The details of the 10 benchmark functions, including the search ranges and theoretical optima, are shown in Table 1; the set contains both unimodal and multimodal functions. The CEC2013 benchmark functions [39] fall broadly into four groups based on their characteristics: unimodal functions, basic multimodal functions, expanded multimodal functions, and hybrid composition functions. To reduce statistical errors, each function is independently run 30 times for comparison. The value of each function is defined as the fitness value in the algorithm. For a thorough comparison, we not only compare the mean best solution, standard deviation, maximum, and minimum of the 9 algorithms on each function, but also carry out the t-test [40]. The p-values of this two-tailed test at a significance level of 0.05 between CPDE and each other algorithm are presented for every function. "General merit over contender" displays the difference between the number of better results and the number of worse results, giving an overall comparison between two algorithms. All the algorithms are implemented on the same machine with a T2450 2.4 GHz CPU, 1.5 GB memory, and the Windows XP operating system with MATLAB R2009b. For the DE, CMAES, JADE, jDE, PSOcf, PSOwFIPS, UPSO, and ABC algorithms, the training parameters are the same as those used in the corresponding references, except for the maximal evolutionary generation and the population size. The parameter values of CPDE are set as in [41]. The population size was set to 100 for all the algorithms. MaxFES was set to 60000 for 30-dimensional and 50-dimensional problems, to 100000 for 100-dimensional and CEC2013 problems, and to 300000 for 500-dimensional and 1000-dimensional problems.
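The two-tailed t-test comparison described above can be reproduced with a few lines of code. The sketch below uses Welch's t statistic as an approximation of the paper's test; the error samples are synthetic placeholders, not the paper's recorded results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic best-error values from 30 independent runs of two algorithms
# (placeholders for illustration; the paper uses the recorded errors of
# CPDE and each contender).
errors_a = rng.normal(1e-3, 2e-4, size=30)
errors_b = rng.normal(5e-3, 1e-3, size=30)

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t_stat = welch_t(errors_a, errors_b)
# For about 30 runs per algorithm, the two-tailed 0.05 critical value of
# the t distribution is roughly 2.04; |t| above it indicates a significant
# difference between the two algorithms.
significant = abs(t_stat) > 2.04
```

In practice one would compute an exact p-value from the t distribution (e.g., with a statistics library) rather than comparing against an approximate critical value; the decision rule at the 0.05 level is the same.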

4.1. Comparison of Different Algorithms for 30-Dimensional Problems

Each column of Table 2 shows the mean best solution (Mean), standard deviation (SD), maximum (Max), and minimum (Min) of the error between the best fitness value found in each run and the true optimal value. Table 2 indicates that CPDE outperforms the other algorithms in all four measures on six of the functions. On one function, DE, CMAES, JADE, jDE, UPSO, ABC, and CPDE obtain the same solutions. On another, CMAES outperforms the other algorithms in all four measures. On a third, the mean best solution, standard deviation, and maximum of CMAES are smaller than those of the other algorithms, while CPDE attains the best minimum. On the last, the mean best solution, maximum, and minimum of CMAES are the smallest, and the standard deviation of jDE is the smallest.

Different algorithms have different convergence characteristics on different functions. The convergence graphs in terms of the median fitness values achieved by each of the 9 algorithms over 30 runs are shown in Figure 7. Figure 7 indicates that the convergence speed of CPDE is the quickest on six of the functions. On one function, all the algorithms converge quickly except PSOcf and PSOwFIPS. On two functions, the convergence speed of CMAES is the quickest, and on one function none of the algorithms converge to the global optimum.

Table 2 and Figure 7 show that CPDE achieves good solution accuracy on the 30-dimensional global optimization problems.

4.2. Comparison of Different Algorithms for 50-Dimensional Problems

Table 3 shows the results in terms of the mean best solution, standard deviation, and maximum and minimum of the solutions obtained in the 30 independent runs by each algorithm on 10 test functions. The best results among the algorithms are shown in bold.

From Table 3 it can be observed that CPDE outperforms the other algorithms in terms of the mean best solution, standard deviation, maximum, and minimum on six of the functions. On one function, jDE, UPSO, and CPDE obtain the same solutions. CMAES performs better on one function, CMAES and CPDE both perform well on another, and CMAES performs better in terms of the mean best solution, maximum, and minimum on one more. Table 3 also gives the p-values on the 10 test functions. Figure 8 presents the convergence graphs in terms of the median fitness values achieved by each of the 9 algorithms for the 50-dimensional problems.

Table 3 and Figure 8 show that CPDE generally offers better performance than the other algorithms considered in this paper, although it performs worse on some of the 50-dimensional functions.

4.3. Comparison of Different Algorithms for 100-Dimensional Problems

To test the performance of CPDE on high-dimensional problems, we simulated the 10 benchmark functions of Table 4 in 100 dimensions. The p-values on the 10 test functions are presented in Table 4. The best results among the algorithms are shown in bold.

From Table 4 it can be observed that CPDE outperforms the other algorithms in all four measures on six of the functions. CMAES performs better on one function; jDE performs better in terms of the mean best solution, maximum, and minimum on another; CPDE performs well in terms of the mean best solution and minimum on a third; and CMAES performs better in terms of the mean best solution, maximum, and minimum on the last. Figure 9 presents the convergence graphs in terms of the median fitness values achieved by each of the 9 algorithms for the 100-dimensional problems.

Table 4 and Figure 9 show that CPDE performs better than the other algorithms on most of the 100-dimensional functions.

4.4. Comparison of Different Algorithms for 500-Dimensional Problems

Table 5 lists the mean best solution, standard deviation, maximum, and minimum of the solutions and the p-values for the 500-dimensional functions. The best results among the algorithms are shown in bold.

From Table 5 it can be observed that CPDE outperforms the other algorithms on all functions except four. Of these, CPDE still performs better in terms of the mean best solution and minimum on one; jDE performs better in terms of the mean best solution, maximum, and minimum on another; ABC performs well on a third; and CMAES performs better in terms of the mean best solution, standard deviation, and maximum on the last, on which the minimum of CPDE remains the best. Figure 10 presents the convergence graphs in terms of the median fitness values achieved by each of the 9 algorithms for the 500-dimensional problems.

Table 5 and Figure 10 indicate that CPDE is a competitive method for high-dimensional global optimization.

4.5. Comparison of Different Algorithms for 1000-Dimensional Problems

The mean best solution, standard deviation, maximum, and minimum of the solutions and the -value for 1000-dimensional functions obtained from 30 independent runs are listed in Table 6. The best results among the algorithms are shown in bold.
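The statistics reported in Tables 4–6 are all computed from the per-run best fitness values of 30 independent runs. The following sketch shows one way to collect them; the helper `summarize_runs` and the stand-in "optimizer" are illustrative assumptions, not part of the paper's code.

```python
import random
import statistics

def summarize_runs(optimizer, runs=30):
    """Run the optimizer independently `runs` times and report the
    summary statistics used in Tables 4-6: mean, standard deviation,
    maximum, and minimum of the per-run best fitness values."""
    bests = [optimizer() for _ in range(runs)]
    return {
        "mean": statistics.mean(bests),
        "std": statistics.stdev(bests),
        "max": max(bests),
        "min": min(bests),
    }

# Illustration with a stand-in "optimizer" that just returns a noisy
# best-fitness value; a real call would run CPDE on one benchmark function.
random.seed(1)
stats = summarize_runs(lambda: abs(random.gauss(0.0, 1e-3)))
print(sorted(stats))  # ['max', 'mean', 'min', 'std']
```

Reporting the maximum and minimum alongside the mean and standard deviation, as the tables do, exposes both the typical behavior and the worst and best runs of each algorithm.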

From Table 6 it can be observed that CPDE outperforms the other algorithms for the functions , , , , , and . jDE performs better for functions and . ABC performs well in terms of the mean best solution, maximum, and minimum for function . CMAES performs better for function . Figure 11 presents the convergence graphs of different benchmark functions in terms of the median fitness values achieved by each of 9 algorithms for 1000-dimensional problems.

4.6. Comparison of Different Algorithms for CEC2013 Problems

To further verify the performance of CPDE, a set of recently proposed CEC2013 rotated benchmark problems is used. In this section, the -test has also been carried out to show the differences between CPDE and the other algorithms. The function parameters are the same as those in [35], except that the stop condition of each algorithm is reaching the maximal number of function evaluations (NFES). The maximal NFES for all algorithms is 100,000. The algorithm parameters are the same as in the previous experiments. Table 7 gives the mean best solution, standard deviation, maximum, and minimum of the solutions and the -value over 30 runs of the 9 algorithms on the 28 test functions. The best results among the algorithms are shown in bold.
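The per-function significance comparison between two algorithms' 30-run samples can be sketched with Welch's two-sample t statistic, computed here from scratch; the two sample arrays are hypothetical stand-ins for the recorded best-fitness values, and the paper's exact statistical test may differ.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (e.g. the
    30-run best-fitness values of two algorithms on one function).
    A large absolute value suggests the difference in means is
    unlikely to be due to run-to-run noise alone."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Two hypothetical 30-run samples; algorithm "a" has the smaller mean error.
a = [0.001 + 0.0001 * i for i in range(30)]
b = [0.010 + 0.0001 * i for i in range(30)]
t = welch_t(a, b)
print(t < 0)  # negative t: mean of "a" is below mean of "b"
```

The sign of the statistic indicates which algorithm has the lower mean error, while its magnitude (compared against the t distribution) determines significance.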

Figure 12 presents the convergence graphs of different benchmark functions in terms of the median fitness values achieved by each of 9 algorithms for CEC2013 problems.

First, functions to are the five unimodal benchmark functions. From Table 7, one can observe that CPDE performs well for functions to . JADE performs well for functions to except function . DE, jDE, and UPSO perform well for functions and . ABC performs well for function . Based on these observations, it may be inferred that CPDE provides significantly better solutions for unimodal functions.

Next, functions to are basic multimodal benchmark functions and to are expanded multimodal benchmark functions. CPDE performs better for functions , , , , and . DE performs better for functions and . JADE performs better for functions , , and . jDE performs better for functions and . ABC performs better for function .

Finally, functions to are hybrid composition functions. CPDE performs better for functions and . JADE performs better for functions and . ABC performs better for functions , , , and .

5. Conclusion

In this paper, we have described a new optimization algorithm, CPDE, whose principle of operation is based on the transformations among the three states of cloud particles. It employs phase transformation to achieve better exploitation and cloud differential mutation to help the cloud particles escape premature convergence. The performance of CPDE has been evaluated on a set of benchmark functions (unimodal, basic multimodal, expanded multimodal, and hybrid composition). As the simulation results show, CPDE generally performs better than the other algorithms on unimodal and multimodal problems. However, CPDE has not performed well on a few hybrid composition functions; this shortcoming may stem from the current phase transformation mechanism of the cloud particles. In future work, we will focus on designing adaptive phase transformation strategies to provide better performance on hybrid composition functions, and we will also seek new ways to improve the exploration ability of the algorithm at the lowest computational cost.
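The role of cloud differential mutation described above can be illustrated with a minimal DE-style sketch. The exact CPDE update rules are not reproduced here; both mutation functions below are generic differential-mutation forms written for illustration, and all names and parameter values are assumptions.

```python
import random

def nucleus_guided(pop, nucleus, F=0.5):
    """Exploration-style mutation in which the best-so-far solution
    (the nucleus) serves as the base vector. Sketch only, not the
    exact CPDE condensation operator."""
    r1, r2 = random.sample(range(len(pop)), 2)
    return [nucleus[d] + F * (pop[r1][d] - pop[r2][d])
            for d in range(len(nucleus))]

def rand_based(pop, F=0.5):
    """DE/rand/1-style mutation: the base vector is a random
    individual, so a misleading nucleus cannot pull every trial
    vector into one basin -- the failure mode that cloud
    differential mutation is introduced to counter."""
    r1, r2, r3 = random.sample(range(len(pop)), 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[0]))]

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(6)]
nucleus = min(pop, key=lambda x: sum(v * v for v in x))  # best on the sphere function
print(len(nucleus_guided(pop, nucleus)), len(rand_based(pop)))  # 2 2
```

The contrast between the two forms shows why relying solely on the nucleus as a base vector risks premature convergence: every trial vector is then anchored at a single point, whereas a randomly chosen base preserves population diversity.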

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors express their sincere thanks to the reviewers for their significant contributions to the improvement of the final paper. This research is partly supported by the National Natural Science Foundation of China (no. 61272283), the Science & Research Plan Project of the Shaanxi Province Department of Education (no. 12JK0737), and the Xi’an Science and Technology Project (no. CXY1437 (5)).