Abstract

This paper proposes the hybrid CSM-CFO algorithm, based on the simplex method (SM), a clustering technique, and central force optimization (CFO), for unconstrained optimization. CSM-CFO remains a deterministic swarm intelligence algorithm, so the complex statistical analysis of numerical results required for stochastic methods can be omitted, while the clustering technique and the good points set are intended to make convergence faster and more accurate. When tested against benchmark functions in low and high dimensions, the CSM-CFO algorithm shows competitive performance in terms of accuracy and convergence speed compared to other evolutionary algorithms: particle swarm optimization, evolutionary programming, and simulated annealing. The comparison results demonstrate that the proposed algorithm is effective and efficient.

1. Introduction

Unconstrained optimization is a mathematical programming technique that attempts to solve nonlinear objective functions. It has become an important branch of operations research and has a wide variety of applications in network security, cloud computation, mobile search, pattern recognition, data classification, power systems, protein structures, medical image registration, and financial markets. Unconstrained optimization problems with n variables may be written in the following form:

minimize f(x), x = (x_1, x_2, …, x_n), l_i ≤ x_i ≤ u_i, i = 1, …, n, (1)

where the objective function f may be continuous or discontinuous, highly multimodal, or “smooth”; n is the dimension of the objective function; and l and u are the lower and upper boundaries of the vector x, respectively.

In the past few decades, various academics have proposed many methods in response to this problem. These methods may be broadly classified into two categories: swarm intelligence techniques and traditional direct search techniques. First, swarm intelligence methods, such as the genetic algorithm developed by Leung and Wang, where a genetic algorithm with quantization for global numerical optimization is proposed [1]; ant colony optimization developed by Neto and Filho, where an improved ant colony optimization approach to a permutational flow shop scheduling problem with outsourcing allowed is proposed [2]; an improved particle swarm optimization developed by Green II et al., where neural networks are trained by particle swarm optimization [3]; simulated annealing developed by Kirkpatrick et al., which comes from annealing in metallurgy and is often used when the search space is discrete [4]; and the gravitational search algorithm developed by Rashedi et al., which is based on the law of gravity and the notion of mass interactions [5], are efficient at exploring the whole search space and localizing the “best” areas. However, these methods share one shortcoming: users are never able to exactly repeat their results, because true random variables are used in the algorithms. Recently, a new swarm intelligence algorithm, namely, central force optimization (CFO), was developed by Formato [6–9]. This algorithm demonstrates and leverages two characteristics that make it unique when compared to other methodologies: a basis in Newton’s law of gravity and a deterministic nature. As CFO is a very young algorithm, it has yet to be compared and contrasted against other algorithms for many different applications [9]. The pseudorandomness of CFO is discussed in [10]. Every CFO run with the same setup returns precisely the same values step by step throughout the entire run. Nevertheless, effective implementations substantially benefit from a “pseudorandom” component that enters the algorithm indirectly, not through its basic equations. Although pseudorandomness is not required in CFO, numerical experiments show that it is an important feature of effective implementations.

Second, direct search methods include hill climbing, an iterative algorithm that starts with an arbitrary solution to a problem and then attempts to find a better solution by incrementally changing a single element of the solution, and the simplex method (SM) developed by Nelder and Mead [11], a well-defined numerical method for problems for which derivatives may not be known. These methods are more efficient than the previous ones at exploiting the best areas already detected. However, one has to be very careful when using these methods, since they are very sensitive to the choice of initial points and are not guaranteed to find the best possible solution (the global optimum) out of all possible solutions.

Although standard CFO is a deterministic algorithm, it shares the inherent drawback of most population-based stochastic algorithms: premature convergence. Any swarm intelligence algorithm is regarded as efficient if it converges quickly and is able to explore the minimum area of the search space; in other words, if CFO is capable of balancing exploration and exploitation of the search space, then CFO is regarded as an efficient algorithm. Some numerical experiments have also shown that stagnation is another inherent drawback of CFO; that is, CFO sometimes stops proceeding towards the global optimum even though the population has not converged to a local optimum or any other point [8, 12]. The problems of premature convergence and stagnation are worth considering when designing an efficient improved CFO algorithm.

In this study, a hybrid algorithm is proposed. The motivation for such a hybrid is to explore a better trade-off between the computational cost and the global optimality of the solution attained. Generally, hybrid methods achieve better solutions than “pure” methods and converge more quickly. Similar ideas have been discussed in hybrid methods using an evolutionary algorithm and a direct search technique [13–15]. However, these hybrid methods are stochastic.

The current study investigates the deterministic hybridization of the simplex method and central force optimization, and the performance of the hybrid algorithm is compared with other pertinent alternatives via simulations. In order to overcome the shortcomings of the simplex method, we propose a clustering algorithm to select suitable vertices from the population. To obtain a better clustering effect, a unique method, namely, the good points set method, is embedded in the clustering algorithm.

The structure of the rest of the paper is as follows. Section 2 reviews the related work. In Section 3, we point out the drawbacks of CFO and present the new hybrid algorithm. In Section 4, we test the new algorithm against a variety of multidimensional functions, all drawn from recognized “benchmark suites” (complex functions with analytically known minima). Finally, Section 5 outlines the conclusions, followed by the Appendix, which lists the benchmark functions.

2. Central Force Optimization and Simplex Method

2.1. Central Force Optimization Algorithm (CFO)

In CFO, every individual of the population is called a probe. Probes are attracted by gravitation based on their defined masses. Probes are considered as objects, and their performance is measured by the fitness function. In other words, each mass represents a solution, which is navigated by adjusting its position according to Newton’s universal law of gravitation.

CFO comprises three steps: (1) initialization; (2) computation of probe’s acceleration; and (3) motion. A general working flow of CFO is shown in Table 1.

In the first step, a population of probes is created in the search space, and the initial position and acceleration vectors are set to zero. In the second step, the compound acceleration vector of each probe, composed of components in each direction, is calculated according to Newton’s universal law. Mass is a user-defined function of the objective function to be minimized. In the n-dimensional search space with N_p probes, the jth probe (j = 1, …, N_p) updates its acceleration according to the following formula:

a_j^t = G Σ_{k=1, k≠j}^{N_p} U(M_k^t − M_j^t) (M_k^t − M_j^t)^α (x_k^t − x_j^t) / ‖x_k^t − x_j^t‖^β, (2)

where x_j^t and a_j^t are the position and acceleration vectors of the jth probe at the tth generation, respectively; M_j^t is the fitness of the jth probe, namely, the objective function value; t is the iteration index; and U represents a piecewise function, the unit step:

U(z) = 1 if z ≥ 0, and U(z) = 0 otherwise. (3)

In (2), G, α, and β do not represent the concrete gravitational fundamentals. In addition, in order to prevent a probe from flying away from the search space, the position of each probe is checked; if a probe is out of range, it is pulled back to the search space.

In the third step, according to the acceleration calculated previously, the position vectors of the probes are updated based on the Newtonian formula. If acceleration is exerted, the jth probe moves from x_j^t to x_j^{t+1} according to the motion equation

x_j^{t+1} = x_j^t + (1/2) a_j^t Δt^2, (4)

where Δt represents the time step. The position of a probe is thus updated from the last “mass” information, as in a deterministic gradient algorithm.
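For concreteness, the following NumPy sketch implements one CFO generation as described by (2)–(4). It is a minimal illustration assuming the unit-step “gravity” form given above; the function names and the default values of G, alpha, beta, and the step are placeholders, not the tuned settings of Section 4.2.

```python
import numpy as np

def cfo_step(R, M, G=1.0, alpha=2.0, beta=2.0, dt=1.0):
    """One deterministic CFO generation: acceleration by eq. (2), motion by eq. (4).

    R : (Np, Nd) array of probe positions; M : (Np,) fitness ("mass", larger is better).
    """
    Np, _ = R.shape
    A = np.zeros_like(R)
    for j in range(Np):
        for k in range(Np):
            if k == j:
                continue
            dM = M[k] - M[j]
            if dM <= 0.0:               # unit step U in eq. (3): only fitter probes attract
                continue
            d = R[k] - R[j]
            dist = np.linalg.norm(d)
            if dist > 0.0:
                A[j] += G * dM**alpha * d / dist**beta
    R_new = R + 0.5 * A * dt**2         # Newtonian motion, eq. (4)
    return R_new, A

def clip_to_bounds(R, lower, upper):
    """Pull probes that left the search space back to the nearest boundary."""
    return np.clip(R, lower, upper)
```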

Analysis of the convergence conditions of CFO has revealed that it converges to the best position searched so far, which is not worse than a predefined one in the initial distribution [9].

Standard Central Force Optimization Algorithm

Step 1 (parameter initialization). Set the objective function’s dimension n, the boundaries l and u, the total number of probes N_p, the gravitational constant G, and the acceleration parameters α and β.

Step 2 (population initialization). Compute the initial probe distribution, the fitness matrix, and the position and acceleration vectors.

Step 3 (loop on time step). Consider the following.
Step 3.1. Compute the probe position vectors.
Step 3.2. If a probe flies out of the boundary, pull it back to the search space.
Step 3.3. Update the fitness matrix of the current probes.
Step 3.4. Compute the acceleration vectors for the next time step.

Step 4. Increase time step and repeat Step 3 until stopping criterion has been met.

2.2. Nelder-Mead Simplex Method (SM)

The Nelder-Mead simplex method (SM) is a local search method designed for unconstrained optimization and belongs to the class of direct search methods. In contrast to most local optimization methods, SM is derivative-free: it does not require any information about the gradient (or higher derivatives) of the objective function when searching for an optimal solution. Therefore, SM is particularly appropriate for solving noncontinuous, nondifferentiable, and multimodal optimization problems.

It uses four parameters, governing reflection, expansion, contraction, and the size of the simplex, to move in the design space based on the objective values at the vertices and the centroid of the simplex.

A simplex is an n-dimensional closed geometric figure with straight-line edges intersecting at n + 1 vertices. The objective function is evaluated at the vertices, and the vertex with the maximum value, the vertex with the minimum value, and the vertex with the second-largest value are identified. The basic move is a reflection that generates a new vertex point, called the “complement” of the worst point; the choice of the reflection direction and of the new vertex depends on the location of the worst point in the simplex. If a new point is again the worst point in the new simplex, the algorithm would oscillate, bouncing back and forth between this point and the earlier worst point; when this happens, the second-worst point is used instead to find the next new point. As the simplex moves through the design space, its centroid moves toward the extremum. If the edges of the simplex are allowed to contract and expand, the method can accelerate toward the optimum. This method has the versatility of adapting itself to the local landscape of the merit surface.

Nelder-Mead Simplex Method Algorithm

Step 1 (order). Evaluate the objective function at the vertices of the simplex and sort the vertices so that f(x_1) ≤ f(x_2) ≤ ⋯ ≤ f(x_{n+1}).

Step 2 (reflection). Compute the reflection point x_r from x_r = x_o + α(x_o − x_{n+1}), where x_o is the center of gravity of all points except x_{n+1}; that is, x_o = (1/n) Σ_{i=1}^{n} x_i. If f(x_1) ≤ f(x_r) < f(x_n), replace x_{n+1} with x_r.

Step 3 (expansion). If f(x_r) < f(x_1), then compute the expansion point x_e from x_e = x_o + γ(x_r − x_o). If f(x_e) < f(x_r), replace x_{n+1} with x_e; otherwise, replace x_{n+1} with x_r.

Step 4 (outside contraction). If f(x_n) ≤ f(x_r) < f(x_{n+1}), compute the outside contraction point x_c = x_o + β(x_r − x_o). If f(x_c) ≤ f(x_r), replace x_{n+1} with x_c. Otherwise, go to Step 6.

Step 5 (inside contraction). If f(x_r) ≥ f(x_{n+1}), compute the inside contraction point from x_c = x_o − β(x_r − x_o). If f(x_c) < f(x_{n+1}), replace x_{n+1} with x_c. Otherwise, go to Step 6.

Step 6 (reduction). For all but the best point x_1, replace each point with x_i = x_1 + δ(x_i − x_1), i = 2, …, n + 1. If the stopping criterion is satisfied, then STOP. Otherwise, go to Step 1.
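The six steps above translate directly into code. The sketch below is a compact Nelder-Mead iteration using the textbook coefficient values (reflection α = 1, expansion γ = 2, contraction β = 0.5, reduction δ = 0.5); these defaults and the iteration cap are illustrative choices following Note 2, not a claim about the paper’s exact implementation.

```python
import numpy as np

def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5,
                eps=1e-2, max_iter=10):
    """Minimal Nelder-Mead sketch following Steps 1-6 above."""
    S = np.asarray(simplex, dtype=float)            # (n + 1, n) vertex array
    for _ in range(max_iter):
        fv = np.array([f(x) for x in S])
        order = np.argsort(fv)
        S, fv = S[order], fv[order]                 # Step 1: order the vertices
        if max(np.linalg.norm(x - S[0]) for x in S[1:]) <= eps:
            break                                   # local stopping criterion (7)
        xo = S[:-1].mean(axis=0)                    # centroid of all but the worst
        xr = xo + alpha * (xo - S[-1])              # Step 2: reflection
        fr = f(xr)
        if fv[0] <= fr < fv[-2]:
            S[-1] = xr
        elif fr < fv[0]:                            # Step 3: expansion
            xe = xo + gamma * (xr - xo)
            S[-1] = xe if f(xe) < fr else xr
        else:
            if fr < fv[-1]:                         # Step 4: outside contraction
                xc = xo + beta * (xr - xo)
            else:                                   # Step 5: inside contraction
                xc = xo - beta * (xr - xo)
            if f(xc) < min(fr, fv[-1]):
                S[-1] = xc
            else:                                   # Step 6: reduction toward the best vertex
                S[1:] = S[0] + delta * (S[1:] - S[0])
    return S[int(np.argmin([f(x) for x in S]))]
```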

3. Hybrid CSM-CFO Method

This section introduces the hybrid method and, in doing so, also demonstrates that the convergence of the SM and the accuracy of the CFO can be further improved simultaneously.

A good mixing strategy should integrate global exploration and local search organically in the design of the algorithm: the global exploration algorithm constantly searches the decision domain of the problem, avoiding premature convergence; the local search probes deeply into regions that may contain the optimal solution, promoting faster convergence to the global optimum. Based on this idea, the simplex method (SM) is introduced into CFO in this paper to improve its local search capability. With this approach, simplex structures are periodically formed starting from the current optimal solution of CFO, which helps the algorithm exploit deeply around the current global best.

The model of the CSM-CFO algorithm is shown in Figure 1 and contains three main steps: global search with CFO, local search with the clustering SM, and probe migration. The migration operation effectively links exploration and exploitation for the convergence of the CSM-CFO algorithm. In order to improve the response speed of SM and ensure the convergence speed, we design the following migration strategy: after several evolutionary generations of the probe population, the population is subdivided into two classes. The class which includes the best probe is chosen to construct a simplex and run the Nelder-Mead simplex method. The worst probe is replaced by the exploration point generated by the SM operation. The flow chart of the CSM-CFO algorithm is shown in Figure 2.

Numerical experiments show that simply using the objective function value as the fitness function is not very satisfactory. The range of function values varies greatly across problems, which can make the algorithm fail to converge, or overflow, when the parameter setting is inappropriate; parameter adjustment therefore becomes an extremely difficult task. To solve this problem, fitness is defined in (5) as a rescaled function of the objective values of the current population.

With this definition we can easily select a fixed set of CSM-CFO parameters by numerical experiments and reuse it across different problems. Moreover, (5) also makes CSM-CFO more robust and efficient.
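As an illustration only, a min-max rescaling over the current population matches the motivation above (one fixed parameter set then serves objective functions whose value ranges differ by orders of magnitude); the sketch below is a plausible stand-in for (5), whose precise form may differ.

```python
import numpy as np

def normalized_fitness(obj_values):
    """Map raw objective values (minimization) to [0, 1]; the best probe gets 1.

    A hypothetical stand-in for eq. (5): scale-free, so fixed parameters can
    serve objective functions with very different value ranges.
    """
    f = np.asarray(obj_values, dtype=float)
    fmax, fmin = f.max(), f.min()
    if fmax == fmin:
        return np.ones_like(f)        # flat population: all equally fit
    return (fmax - f) / (fmax - fmin)
```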

In CSM-CFO, the main structure and properties of CFO are preserved. However, to give CFO the ability to handle complex or ill-conditioned functions, we apply some modifications inspired by the simplex method and clustering. It is noted that the proposed algorithm is able to form niches. According to the above definitions, the steps of the proposed CSM-CFO are as follows.

Algorithm 1 (CSM-CFO). Consider the following.
Step 1 (initialization). Generate the initial swarm of N_p individuals with the deterministic “3-D Fano load matching network” distribution [6]: for j = 1 to N_p and i = 1 to n, compute the initial coordinate x_{ji}, and set t = 0. Each individual represents a candidate solution and is considered as a real-valued vector x_j = (x_{j1}, …, x_{jn}), where N_p is the population size, n is the dimension of the objective function, and the x_{ji} are the decision variables to be optimized; u and l are the upper and lower boundaries of the variables. Set the parameters of the simplex method: the reflection, contraction, and expansion coefficients. The operation period of the clustering simplex method is set to T_c, and the current population is divided into K subpopulations.
Step 2 (fitness evaluation). Calculate the fitness value of each individual x_j (j = 1, …, N_p) with (5).
Step 3 (computation of position and acceleration). At time t, the position and acceleration of each individual are computed according to (4) and (2), respectively.
Step 4 (cluster). If t mod T_c = 0, the population is divided into K subpopulations with Algorithm 2; then go to Step 5.
Step 5 (simplex method search). The subpopulation which includes the best individual is chosen, and the simplex method is run in this subpopulation. The improved best individual is moved into the population and replaces the worst individual; then go to Step 6.
Step 6. Set t = t + 1; Steps 2–5 are repeated until the stopping criterion is satisfied.
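Putting the pieces together, the skeleton below wires the earlier cfo_step, clip_to_bounds, normalized_fitness, and nelder_mead sketches into the Step 1–6 loop of Algorithm 1, together with the cluster_population and good_points helpers sketched after Algorithms 2 and 3 below. The grid initialization merely stands in for the “3-D Fano load matching network” distribution of [6], and the default N_p, K, and T_c values are placeholders.

```python
import numpy as np

def csm_cfo(f, lower, upper, n_probes=40, K=5, Tc=10, max_evals=150_000):
    """Sketch of Algorithm 1 (assumes n_probes >= n + 1 so a simplex can be built)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    # Step 1: deterministic initial distribution (diagonal grid as a stand-in)
    w = np.linspace(0.0, 1.0, n_probes)[:, None]
    R = lower + w * (upper - lower)
    t, evals = 0, 0
    while evals < max_evals:
        fv = np.array([f(x) for x in R]); evals += n_probes  # Step 2 (eq. (5) stand-in)
        M = normalized_fitness(fv)
        R, _ = cfo_step(R, M)                                # Step 3: eqs. (2) and (4)
        R = clip_to_bounds(R, lower, upper)
        if t % Tc == 0:                                      # Step 4: cluster
            groups = cluster_population(R, K, lower, upper)
            j_best = int(np.argmin(fv))
            idx = sorted(next(g for g in groups if j_best in g), key=lambda i: fv[i])
            for i in np.argsort(fv):      # pad to n + 1 vertices (Note 1, simplified)
                if len(idx) >= n + 1:
                    break
                if int(i) not in idx:
                    idx.append(int(i))
            x_loc = nelder_mead(f, R[idx[: n + 1]])          # Step 5: local SM search
            R[int(np.argmax(fv))] = x_loc                    # migration: replace worst
        t += 1                                               # Step 6
    fv = np.array([f(x) for x in R])
    return R[int(np.argmin(fv))], float(fv.min())
```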

Algorithm 2 (clustering algorithm). Consider the following.
Step 1. Produce a reference point r in the search space by Algorithm 3.
Step 2. Find the individual nearest to r, that is, the point with the minimum distance from r.
Step 3. Repeat Step 2 until N_p/K individuals have been selected. These individuals form a subpopulation.
Step 4. Remove these individuals from the current population.
Step 5. Repeat Steps 2–4 until the population is divided into K subpopulations.
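The sketch below is one reading of Algorithm 2, using one deterministic reference point per subpopulation; it assumes the good_points generator sketched under Algorithm 3 below, takes the group size N_p/K from Step 3, and returns index lists so callers can map back to the population.

```python
import numpy as np

def cluster_population(R, K, lower, upper):
    """Algorithm 2 sketch: split probe indices into K distance-based subpopulations."""
    refs = good_points(K, lower, upper)     # K deterministic reference points (Algorithm 3)
    pool = list(range(len(R)))
    size = max(1, len(R) // K)
    groups = []
    for r in refs:
        pool.sort(key=lambda i: np.linalg.norm(R[i] - r))   # nearest remaining first
        groups.append(pool[:size])
        pool = pool[size:]
    if pool:
        groups[-1].extend(pool)             # any leftovers join the last subpopulation
    return groups
```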

Algorithm 3 (good points set algorithm). Consider the following.
Step 1. Generate a point r = (r_1, …, r_n) with r_k = 2 cos(2πk/p), 1 ≤ k ≤ n, where p is the minimum prime number satisfying p ≥ 2n + 3.
Step 2. Let P_m(i) = ({r_1 i}, …, {r_n i}), 1 ≤ i ≤ m, where {r_k i} is the decimal fraction of r_k i; then the point set {P_m(i) : 1 ≤ i ≤ m} is called a good points set.
Step 3. The map x_k(i) = l_k + {r_k i}(u_k − l_k) is defined so that each good point is mapped to the search region [l_k, u_k].
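The classical good points construction codes up in a few lines. The sketch below uses r_k = 2 cos(2πk/p) with p the smallest prime not less than 2n + 3, a common choice in the good points set literature; this condition on p is an assumption where [16, 17] may differ in detail.

```python
import numpy as np

def good_points(m, lower, upper):
    """First m points of a good points set, mapped into the box [lower, upper]."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    p = 2 * n + 3                           # search for the smallest prime >= 2n + 3
    while any(p % q == 0 for q in range(2, int(p**0.5) + 1)):
        p += 1
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, n + 1) / p)  # generating vector
    i = np.arange(1, m + 1)[:, None]
    frac = np.mod(i * r, 1.0)               # decimal fractions {r_k * i}
    return lower + frac * (upper - lower)   # Step 3: map into the search region
```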

Note 1. The vertices of the simplex are generated by Algorithm 2. The population is divided into two subgroups, and the group which includes the current best probe is chosen. If the size of this group is larger than n + 1, the best n + 1 probes construct the simplex. If the size of this group is less than n + 1, some probes from the other group that are nearest to the best probe are chosen and added to the group to construct the simplex. Note that the clustering algorithm is not strictly a classical clustering algorithm, such as the K-medoids algorithm, the Birch algorithm, or Density-Based Spatial Clustering of Applications with Noise (DBSCAN); in this paper, we focus simply on dividing the whole population into a number of subpopulations. To enhance the result of Algorithm 2, its reference point is generated by Algorithm 3, the good points set method [16, 17].
The good points set distributes points more evenly than random points. When the reference point of Algorithm 2 is generated by the good points set method, the points are clearly uniform and representative (see Figure 3). This helps Algorithm 2 obtain better results than generating the reference point randomly.

Note 2. The local stopping criterion is defined as

max_{1 ≤ i ≤ n} ‖v_i − v_0‖ ≤ ε, (7)

where the v_i denote the vertices of the current simplex, v_0 is the best vertex, and ‖·‖ denotes the Euclidean norm. In order to achieve quicker convergence, the simplex method stops when either (7) is satisfied or the given number of iterations is reached. In the following numerical experiments, ε is 10^-2 and the number of iterations is 10. The stopping criterion of CSM-CFO is reaching the maximum number of function evaluations.
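In code, criterion (7) is a one-liner; the sketch below assumes the vertices are ordered so that the first entry is the best vertex v_0, consistent with the Nelder-Mead sketch of Section 2.2.

```python
import numpy as np

def simplex_converged(vertices, eps=1e-2):
    """Criterion (7): every vertex lies within eps of the best vertex v0."""
    V = np.asarray(vertices, dtype=float)
    return max(np.linalg.norm(v - V[0]) for v in V[1:]) <= eps
```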

4. Numerical Experiments and Analysis

To evaluate the performance of the proposed algorithm, a number of internationally recognized standard test functions (f_1–f_23) [18] and CEC benchmark functions [19] are selected. The standard test functions are divided into three classes: benchmark function set (1), unimodal functions; benchmark function set (2), multimodal functions; and benchmark function set (3), fixed-dimension functions, as shown in Tables 1, 2, and 3. These functions are widely used in published papers, and it is very difficult to track their global minima. For example, for the multimodal functions f_8–f_13, the existing local optima can misguide the population to move away from the true global optimum, and some functions have more than one global minimum with many local minima around them. The CEC benchmark functions are more complex than the standard test functions.

4.1. Standard Test Functions

Tables 1–3 list the functions used in our numerical experiments. In these tables, the dimension of each function and its search space are given. A detailed description of functions 1–23 is given in the Appendix.

4.2. Experimental Setting

A comparative study was done to assess the effectiveness of CSM-CFO through experiments on the test functions. The parameter settings of CSM-CFO are summarized as follows. The initial population of CSM-CFO is generated by the “3-D Fano load matching network” distribution in [6], and the population size is fixed across all experiments. The gravitational constant G is 1 and α is 2; the constant β is 15. The reflection, contraction, and expansion coefficients are 1, 1, and 0.5, respectively. The termination criterion is the maximum number of function evaluations; the dimension of the standard test functions and of the CEC benchmark functions is 30.

We compared the performance of CSM-CFO with that of three different evolutionary algorithms:
(1) the hybrid Nelder-Mead simplex method and particle swarm optimization developed by Fan and Zahara (NM-PSO) [14];
(2) evolutionary programming developed by Yao et al. (EP) [18];
(3) the hybrid simulated annealing method developed by Hedar and Fukushima (DSSA) [20].
All the control parameters, for example, the mutation rate of EP, the inertia weight of NM-PSO, the temperature of DSSA, and so forth, were set to the defaults recommended in the original articles. In addition, the maximum number of function evaluations was set to 150,000 for the standard test functions and to 300,000 for the CEC benchmark functions.

4.3. Comparison of Algorithm Performance

The results of the performance evaluation achieved by the four algorithms on the four types of functions (unimodal, high-dimensional multimodal, fixed-dimension multimodal, and CEC benchmark functions) are shown below.

4.3.1. Unimodal Functions

Functions f_1 to f_7 are unimodal functions, for which the focus is on the convergence rate, because current optimization algorithms are already able to find global optima close to the actual optima; unimodal functions are therefore commonly adopted to assess the convergence rate of evolutionary algorithms. The aim is to obtain the best global minimum in the least number of iterations. We tested CSM-CFO on this set of unimodal functions in comparison with the other three algorithms. Table 4 lists the means and standard deviations of the function values in the last iteration. As Table 4 illustrates, CSM-CFO generated significantly better results than EP on all the unimodal functions. From the comparisons of CSM-CFO, NM-PSO, and DSSA, we can see that CSM-CFO performed significantly better on functions 3, 4, 5, and 7. In summary, the search performance of the four algorithms can be ordered as CSM-CFO > NM-PSO > DSSA > EP.

4.3.2. Multimodal Functions

This set of benchmark functions has more than one local optimum and can have either a single global optimum or several. For multimodal functions, the final results are more important, since they reflect the ability of the algorithm to escape from poor local optima and locate a near-global optimum. We carried out experiments on f_8 to f_13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of these functions is set to 30. The results of NM-PSO, EP, and DSSA are averaged over 30 runs, and the means and standard deviations of the function values are reported in Table 5.

From Table 5, it is clear that for four of the tested benchmark functions, CSM-CFO outperformed the other algorithms; for functions 8, 9, 10, and 13 especially, CSM-CFO provides much better solutions. However, for the remaining functions 11 and 12, CSM-CFO cannot tune itself and does not perform well, and NM-PSO outperformed CSM-CFO statistically. This is also in accord with the “no free lunch” theorem. It can be concluded from Table 5 that the order of the search performance of these four algorithms is CSM-CFO > NM-PSO > DSSA > EP.

4.3.3. Functions in Fixed Dimension

This set of benchmark functions has fixed dimensions and only a few local minima. Compared to the multimodal functions with many local minima (f_8–f_13), this set of functions is not challenging; some of them can even be solved efficiently by deterministic algorithms.

From Table 6, we can see that, in comparison to EP, NM-PSO and DSSA achieved better results on all benchmark functions. In comparison with NM-PSO, CSM-CFO has better performance on most of the functions, except function 21, where NM-PSO generated better average results. Table 6 also shows the comparison between CSM-CFO, CFO, and PSO on the multimodal fixed-dimension test functions of Table 3. The results show that CSM-CFO, CFO, and PSO find similar solutions and perform almost identically, which indicates that there is no big difference between CSM-CFO and the other algorithms when the dimension of the test functions is low. From Table 6, we can see that the order of the search performance of these four algorithms is CSM-CFO ≈ NM-PSO > DSSA > EP.

4.3.4. CEC Benchmark Functions

For the sake of testing the performance of the proposed algorithm and illustrating our arguments about searching on high-dimensional problems, we chose five recently developed novel composition functions, the CEC benchmark functions. CEC provides a set of test functions for competition every year; for example, the CEC2013 LSGO benchmark suite is currently the latest proposed benchmark in the field of large-scale optimization, while the topic of CEC2005 was single-objective global optimization, for which a set of benchmark functions was published. We chose CEC1–CEC5 as test functions for the four algorithms. Details of the construction and properties of the composition functions can be found in [19].

The results of CSM-CFO are compared with the performance of the other three algorithms in Table 7, where the mean and standard deviation over 30 independent runs are listed for every algorithm except CSM-CFO, which is deterministic and is therefore reported from a single run. On CEC1 to CEC5, CSM-CFO achieves better results than EP and DSSA; however, there is no significant difference between CSM-CFO and NM-PSO. From Table 7, we can see that the order of the search performance of these four algorithms is NM-PSO > CSM-CFO > DSSA > EP.

5. Conclusion

In this paper, a hybrid CSM-CFO algorithm is presented for locating the global optima of continuous unconstrained optimization problems. The new algorithm preserves the main merit of central force optimization: the results do not change as long as the initial population is unchanged. The clustering technique and the good points set method are used to enhance the robustness of the simplex method. The substantial improvements in effectiveness, efficiency, and accuracy reported here justify the claim that the hybrid approach presents an excellent trade-off between exploitation by the simplex method and exploration by central force optimization.

In order to evaluate the proposed algorithm, it is tested on a set of standard benchmark problems. The results obtained by CSM-CFO are superior in most cases and comparable in all cases with NM-PSO, EP, and DSSA. The application of CSM-CFO to large-scale optimization problems and the improvement of its convergence speed need further study and will be our ongoing work in the future.

Appendix

The 23 test functions we employed are given below. To define the test functions, we have adopted the following general format: Function k (dimension): name of function; (a) function definition and search space; (b) global optimum.

Test Functions

Function 1 (30-D): sphere:
(a) f_1(x) = Σ_{i=1}^{n} x_i^2, −100 ≤ x_i ≤ 100;
(b) global optimum f_1 = 0 at x = (0, …, 0).
Function 2 (30-D): Schwefel 2.22:
(a) f_2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|, −10 ≤ x_i ≤ 10;
(b) global optimum f_2 = 0 at x = (0, …, 0).
Function 3 (30-D): Schwefel 1.2:
(a) f_3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2, −100 ≤ x_i ≤ 100;
(b) global optimum f_3 = 0 at x = (0, …, 0).
Function 4 (30-D): Schwefel 2.21:
(a) f_4(x) = max_i |x_i|, −100 ≤ x_i ≤ 100;
(b) global optimum f_4 = 0 at x = (0, …, 0).
Function 5 (30-D): Rosenbrock:
(a) f_5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2], −30 ≤ x_i ≤ 30;
(b) global optimum f_5 = 0 at x = (1, …, 1).
Function 6 (30-D): step function:
(a) f_6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)^2, −100 ≤ x_i ≤ 100;
(b) global optimum f_6 = 0 at x = (0, …, 0).
Function 7 (30-D): quartic function:
(a) f_7(x) = Σ_{i=1}^{n} i x_i^4 + random[0, 1), −1.28 ≤ x_i ≤ 1.28;
(b) global optimum f_7 = 0 at x = (0, …, 0).
Function 8 (30-D): Schwefel 2.26:
(a) f_8(x) = −Σ_{i=1}^{n} x_i sin(√|x_i|), −500 ≤ x_i ≤ 500;
(b) global optimum f_8 = −12569.5 at x = (420.9687, …, 420.9687).
Function 9 (30-D): Rastrigin:
(a) f_9(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10], −5.12 ≤ x_i ≤ 5.12;
(b) global optimum f_9 = 0 at x = (0, …, 0).
Function 10 (30-D): Ackley:
(a) f_10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e, −32 ≤ x_i ≤ 32;
(b) global optimum f_10 = 0 at x = (0, …, 0).
Function 11 (30-D): Griewank:
(a) f_11(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1, −600 ≤ x_i ≤ 600;
(b) global optimum f_11 = 0 at x = (0, …, 0).
Function 12 (30-D): generalized penalized function:
(a) f_12(x) = (π/n){10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), −50 ≤ x_i ≤ 50, where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a;
(b) global optimum f_12 = 0 at x = (−1, …, −1).
Function 13 (30-D): generalized penalized function:
(a) f_13(x) = 0.1{sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4), −50 ≤ x_i ≤ 50, with u as defined in function 12;
(b) global optimum f_13 = 0 at x = (1, …, 1).
Function 14 (2-D): Shekel’s foxholes:
(a) f_14(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_{ij})^6)]^{−1}, −65.536 ≤ x_i ≤ 65.536, where (a_{ij}) is the standard 2 × 25 matrix of foxhole centers;
(b) global optimum f_14 ≈ 0.998 at x = (−32, −32).
Function 15 (4-D): Kowalik:
(a) f_15(x) = Σ_{i=1}^{11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2, −5 ≤ x_i ≤ 5;
(b) global optimum f_15 ≈ 0.0003075 at x ≈ (0.1928, 0.1908, 0.1231, 0.1358).
Function 16 (2-D): six-hump camel:
(a) f_16(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1 x_2 − 4x_2^2 + 4x_2^4, −5 ≤ x_i ≤ 5;
(b) global optimum f_16 = −1.0316285 at x = (0.08983, −0.7126) and x = (−0.08983, 0.7126).
Function 17 (2-D): Branin:
(a) f_17(x) = (x_2 − (5.1/(4π^2))x_1^2 + (5/π)x_1 − 6)^2 + 10(1 − 1/(8π)) cos x_1 + 10, −5 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 15;
(b) global optimum f_17 = 0.398 at x = (−3.142, 12.275), (3.142, 2.275), and (9.425, 2.425).
Function 18 (2-D): Goldstein-Price:
(a) f_18(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1 x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1 x_2 + 27x_2^2)], −2 ≤ x_i ≤ 2;
(b) global optimum f_18 = 3 at x = (0, −1).
Function 19 (3-D) and function 20 (6-D): Hartman’s family:
(a) f(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{n} a_{ij}(x_j − p_{ij})^2), 0 ≤ x_j ≤ 1, for n = 3 and 6; the coefficients a_{ij}, c_i, and p_{ij} are defined by Tables 8 and 9, respectively;
(b) global optimum f_19 = −3.86 at x = (0.114, 0.556, 0.852); global optimum f_20 = −3.32 at x = (0.201, 0.150, 0.477, 0.275, 0.311, 0.657).
Function 21 (4-D), function 22 (4-D), and function 23 (4-D): Shekel’s family:
(a) f(x) = −Σ_{i=1}^{m} [(x − a_i)(x − a_i)^T + c_i]^{−1}, 0 ≤ x_j ≤ 10, with m = 5, 7, and 10 for f_21, f_22, and f_23, respectively;
(b) these functions have five, seven, and ten local minima for f_21, f_22, and f_23, respectively; x_local ≈ a_i and f(x_local) ≈ −1/c_i for 1 ≤ i ≤ m; the coefficients a_i and c_i are defined in Table 10.
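For readers who wish to reproduce the experiments, a few of the benchmark functions translate into NumPy as follows; the definitions follow the standard forms given above, with the dimension n implied by the input vector.

```python
import numpy as np

def sphere(x):                      # function 1, global minimum 0 at the origin
    return np.sum(x**2)

def rosenbrock(x):                  # function 5, global minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def rastrigin(x):                   # function 9, global minimum 0 at the origin
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):                      # function 10, global minimum 0 at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):                    # function 11, global minimum 0 at the origin
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```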

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous referees for their careful reading of the paper and numerous suggestions for its improvement. This work was supported by the National Natural Science Foundation of China under Grants nos. 61272119, 11301414, and 11226173.