Abstract

In order to enhance the convergence capability of the central force optimization (CFO) algorithm, an adaptive central force optimization (ACFO) algorithm is presented by introducing an adaptive weight and defining an adaptive gravitational constant. The adaptive weight and gravitational constant are selected based on the stability theory of discrete time-varying dynamic systems. The convergence capability of the ACFO algorithm is compared with other improved CFO algorithms and evolutionary-based algorithms using 23 unimodal and multimodal benchmark functions. Experimental results show that ACFO substantially enhances the performance of CFO in terms of global optimality and solution accuracy.

1. Introduction

Consider the following global optimization problem (1): minimize f(x) subject to x_min ≤ x ≤ x_max, where f is a real-valued bounded function and x, x_min, and x_max are N-dimensional continuous variable vectors. Such problems arise in many applications, for example, in risk management, applied sciences, and engineering design. The function of interest may be nonlinear and nonsmooth, which can cause classical optimization algorithms to fail to solve these problems. Over the last decades, many nature-inspired heuristic optimization algorithms that require little information about the function have become widely used optimization methods, such as genetic algorithms (GA) [1], particle swarm optimization (PSO) [2], ant colony optimization (ACO) [3], the cuckoo search (CS) algorithm [4], the group search optimizer (GSO) [5], and glowworm swarm optimization (GSO1) [6]. These search methods all simulate biological phenomena. Different from these algorithms, some heuristic optimization algorithms based on physical principles have been developed, for example, the simulated annealing (SA) algorithm [7], the electromagnetism-like mechanism (EM) algorithm [8], the central force optimization (CFO) algorithm [9], the gravitational search algorithm (GSA) [10], and charged system search (CSS) [11]. SA simulates solid material in the annealing process. EM is based on Coulomb's force law associated with electrical charges. GSA and CFO utilize Newtonian mechanics laws. CSS is based on Coulomb's force and Newtonian mechanics laws. Unlike the other algorithms, CFO is a deterministic method; in other words, there is no randomness in CFO, which is what draws our attention to the CFO algorithm in this paper.

CFO, which was introduced by Formato in 2007 [9], is a novel deterministic heuristic optimization algorithm based on gravitational kinematics. In order to improve the CFO algorithm, Formato and other researchers developed many versions of it [12–23]. In [12, 13], Formato proposed the PR-CFO (Pseudo-Random CFO) algorithm, whose improvements are made in three areas: initial probe distribution, repositioning factor, and decision space adaptation. Formato presented an algorithm known as PF-CFO (Parameter Free CFO) in [14, 15]; PF-CFO improves and perfects the PR-CFO algorithm in the aspect of parameter selection. Mahmoud proposed an efficient global hybrid optimization algorithm combining the CFO algorithm and the Nelder-Mead (NM) method in [16]; this hybrid method is called CFO-NM. An extended CFO (ECFO) algorithm was presented by Ding et al. by adding historical information and defining an adaptive mass in [17], where the convergence of the ECFO algorithm was proved based on a second-order difference equation.

In the aforementioned CFO algorithms, two update equations are used: one for a probe's acceleration and the other for its position. In the position update equation, which is established based on the laws of motion, the velocity is set to zero. But the velocity influences the exploring ability of the CFO algorithm. Therefore, in this paper, we introduce the velocity into the position update equation, which leads us to build a velocity update equation like that of the CSS algorithm. Since a weight that balances the global and local search abilities is an important parameter in many heuristic algorithms, we also introduce a weight into the position update equation. If the value of the weight is too large, the probes may move erratically, overshooting a good solution. On the other hand, if the weight is too small, the probes' movement is limited and the optimal solution may not be reached. Therefore, an appropriately and dynamically changing weight can improve the performance of a heuristic algorithm. However, in most heuristic algorithms, the changing weight is selected empirically according to the characteristics of the problems, without theoretical analysis. The gravitational constant has the same effect as the weight in the CFO algorithm. Hence, this paper further investigates the weight and gravitational constant settings by employing the geometry-velocity stability theory of discrete time-varying dynamic systems. Based on the above discussion, an adaptive CFO (ACFO) algorithm is proposed in this paper.

To the best of our knowledge, there has been no research on stability analysis of the CFO algorithm to date. In this paper, the stability of the ACFO algorithm is analyzed based on discrete time-varying dynamic systems theory, and based on this stability analysis we derive the weight and gravitational constant settings.

The rest of this paper is organized as follows. Section 2 presents the basics of the CFO algorithm and reviews the state of the art concerning the algorithm. In Section 3, we propose the adaptive central force optimization algorithm. Numerical results testing the performance of the proposed algorithm are given in Section 4. Finally, Section 5 concludes the paper.

2. Central Force Optimization

CFO solves problem (1) based on the movement of probes through the decision space (DS) along trajectories computed by utilizing the gravitational analogy. The DS is defined by the bound constraints x_min ≤ x ≤ x_max. In CFO, a group of probes represents potential solutions, and each probe p is associated with a position vector R_p(j) and an acceleration vector a_p(j) at time step j. The position of each probe is initialized by a variable initial probe distribution formed by deploying probes uniformly on probe lines parallel to the coordinate axes and intersecting at a point along the DS's principal diagonal, where N_p is the total number of initial probes. The initial acceleration vectors are usually set to zero. During the search process, the acceleration and position of probe p are updated by (2) and (3), where G is the gravitational constant; M_k(j) is the fitness value at probe k's position at time step j; alpha and beta are parameters; i is the coordinate number; U(·) is the unit step function; and Δt is the unit time step increment. A probe generated by (3) may move beyond the DS. If a coordinate of the probe is less than its lower bound, it is repositioned by (5); if it is greater than its upper bound, it is repositioned by (6), where the bounds are the minimum and maximum values of the i-th component of the decision variable and F_rep is the reposition factor, which starts at an arbitrary initial value and is incremented by an arbitrary amount at each iteration. If F_rep exceeds its limit, it is reset to its starting value. In order to improve convergence speed, the DS size is adaptively reduced around the best probe, with the DS's boundary coordinates reduced according to (7). The termination criterion is that the iterations reach their maximum limit. We also terminate the CFO algorithm early if the difference between the average best fitness over the last several steps (including the current step) and the current best fitness is less than a prescribed tolerance.
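Because the bodies of update equations (2) and (3) did not survive extraction, the following Python sketch illustrates one CFO step under the commonly published form of the algorithm (gravitational constant G, exponents alpha and beta, unit step U admitting only fitter probes, and a position update with zero velocity). All names here are illustrative, not the authors' implementation:

```python
import numpy as np

def cfo_step(R, a, M, G=2.0, alpha=2.0, beta=2.0, dt=1.0):
    """One CFO update: accelerations from the gravitational analogy,
    then positions from the zero-velocity equation of motion.
    R: (Np, Nd) probe positions; a: (Np, Nd) accelerations;
    M: (Np,) fitness values (larger is better in CFO)."""
    Np = R.shape[0]
    a_new = np.zeros_like(a)
    for p in range(Np):
        for k in range(Np):
            if k == p:
                continue
            diff = R[k] - R[p]
            dist = np.linalg.norm(diff)
            # unit step U(M_k - M_p): only strictly fitter probes attract
            if dist > 0.0 and M[k] > M[p]:
                a_new[p] += G * (M[k] - M[p]) ** alpha * diff / dist ** beta
    # position update with the velocity fixed at zero, as in classical CFO
    R_new = R + 0.5 * a_new * dt ** 2
    return R_new, a_new
```

Note that the fittest probe attracts all others but is itself attracted by none, so it stays put until a better position is found elsewhere.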

In order to improve the CFO algorithm, Formato proposed modifications to it, namely, PR-CFO [12, 13]. The steps of the PR-CFO algorithm [13] are as follows.

For each number of probes per dimension (outer loop, step size 2):
  For each value of the initial distribution factor gamma (inner loop):
    (a.1) compute the initial probe distribution with distribution factor gamma;
    (a.2) compute the initial fitness matrix; select the best probe fitness;
    (a.3) assign the initial probe accelerations;
    (a.4) set the initial reposition factor F_rep.
    For each time step (or until an earlier termination criterion is met):
      (b) update probe positions using (3);
      (c) retrieve errant probes using (5) and (6);
      (d) calculate fitness values; select the best probe fitness;
      (e) compute accelerations using (2);
      (f) increment F_rep; if it exceeds its limit, then reset it to its starting value;
      (g) if the step number MOD 10 = 0, then shrink the DS around the best probe using (7).
    Next step
    (h) reset the DS's boundaries to their starting values before shrinking.
  Next gamma
Next.

PR-CFO was further modified to create an algorithm known as PF-CFO (Parameter Free CFO) [14, 15]. This version is almost identical to PR-CFO and compensates for the number of parameters that must be chosen by fixing a wide array of internal parameters at specific values [19]. The values of the parameters borrowed from [19] that are used in the PF-CFO algorithm are listed in Table 1.

3. Adaptive Central Force Optimization Algorithm

Qian and Zhang [23] proposed an adaptive central force optimization algorithm in which the time increment in (3) is updated based on the fitness value compared with the average fitness value. In this paper, we instead introduce an adaptive weight in the position update equation, an adaptive gravitational constant in the acceleration update equation, and a velocity update formula. The weight and gravitational constant are updated based on the stability analysis of a discrete time-varying dynamic system. In the ACFO algorithm, the position, acceleration, and velocity of probe p are updated by (8), (9), and (10), where w is the weight; G_p(t) is the gravitational constant at probe p's position at iteration t; alpha and beta are parameters; i is the coordinate number; and the gravitational constant is defined by (11), where d(p, k) is the Euclidean distance between probes p and k and r is a radius constant.
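The symbols of (8)–(10) are missing from the text above; as a hedged illustration only, the following sketch assumes a CSS-style form in which the new position combines the weighted old position, the velocity term, and the acceleration term, and the new velocity is the displacement per unit time. The names w, v, a, and dt are assumptions, not the paper's notation:

```python
import numpy as np

def acfo_probe_update(R, v, a, w, dt=1.0):
    """Hypothetical ACFO position/velocity update (assumed form of
    eqs. (8) and (10)): weighted position + velocity + acceleration,
    then velocity taken as displacement over the time step."""
    R_new = w * R + v * dt + 0.5 * a * dt ** 2   # assumed eq. (8)
    v_new = (R_new - R) / dt                     # assumed eq. (10)
    return R_new, v_new
```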

Let the auxiliary quantities be defined as in (12); it is clear that they are nonnegative. By (8), (9), and (10), the position and velocity update equations can be written as (13). Equations (13) can be written in matrix form as (14).

With the state vector and coefficient matrices defined as in (15), equation (14) can be expressed as the discrete time-varying dynamic system (16).

Lemma 1 (see [24]). Let (16) be a discrete time-varying dynamic system; if its coefficient matrix satisfies condition (17) under a certain vector norm, then the system is geometry-velocity stable in a bounded set.

Cui and Zen presented a selection of the parameters of the PSO algorithm based on Lemma 1 in [24]. We now analyze the stability of the ACFO algorithm and give a selection of the weight and gravitational constant based on Lemma 1.

If the norm in Lemma 1 is taken to be the infinity norm, then we have (18).

Since the gravitational constant is a nonnegative finite number for any probe and the fitness is a bounded function, we may assume that both are bounded above. Thus, we can obtain (19). By (11), a bound on the gravitational constant follows, and therefore we obtain (20). In addition, we have (21).

Set the stability constant accordingly. If condition (22) holds, then by Lemma 1, system (16) is geometry-velocity stable in the bounded set.

By (22), one obtains bounds relating the weight and the gravitational constant; a case analysis on these quantities yields the admissible ranges used below.

From the above discussion, in order to guarantee the geometry-velocity stability of system (16), the weight and gravitational constant are selected according to (23), which involves a random number in the interval (0, 1). However, the CFO algorithm is a deterministic method and should not contain any randomness. Therefore, we instead take the parameters according to (24) and (25), where the two constants involved lie between 0 and 1.

The specific iterative steps of the ACFO algorithm are as follows.

For each number of probes per dimension (outer loop, step size 2):
  For each value of the initial distribution factor gamma (inner loop):
    (a.1) compute the initial probe distribution with distribution factor gamma;
    (a.2) compute the initial fitness matrix; select the best probe fitness;
    (a.3) assign the initial probe accelerations and velocities;
    (a.4) set the initial reposition factor F_rep and gravitational constant.
    For each iteration (or until an earlier termination criterion is met):
      (b) compute weights using (25);
      (c) update probe positions using (8);
      (d) retrieve errant probes using (5) and (6);
      (e) calculate fitness values; select the best probe fitness;
      (f) update the gravitational constant using (24);
      (g) compute accelerations using (9) and velocities using (10);
      (h) increment F_rep; if it exceeds its limit, then reset it to its starting value;
      (i) if the iteration number MOD 10 = 0, then shrink the DS around the best probe using (7).
    Next iteration
    (l) reset the DS's boundaries to their starting values before shrinking.
  Next gamma
Next.

In the ACFO algorithm, the initial acceleration vectors are set to zero, and the initial velocity vectors are set in terms of the unit vectors along the coordinate axes.
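The loop above can be sketched end to end. Because the formulas of (8)–(11) and (24)–(25) are stripped from the extracted text, this Python sketch fixes w and G instead of adapting them and simply clamps errant probes to the DS rather than applying the reposition factor of (5)–(6); it is a structural illustration only, with all names hypothetical:

```python
import numpy as np

def acfo_sketch(f, x_min, x_max, n_probes=8, max_iter=100, dt=1.0,
                w=0.9, G=0.5, alpha=1.0, beta=1.0):
    """Structural sketch of the ACFO loop. Fitness is M = -f, so larger
    is better, matching the paper's maximization convention."""
    # deterministic initial distribution along the DS's principal diagonal
    t = np.linspace(0.0, 1.0, n_probes)[:, None]
    R = x_min + t * (x_max - x_min)
    v = np.zeros_like(R)
    a = np.zeros_like(R)
    M = np.array([-f(r) for r in R])
    for _ in range(max_iter):
        # (c) position update: weighted position + velocity + acceleration
        R_new = w * R + v * dt + 0.5 * a * dt ** 2
        # (d) clamp errant probes back into the DS (simplified retrieval)
        R_new = np.clip(R_new, x_min, x_max)
        # (g) velocity as displacement per unit time (assumed form of (10))
        v = (R_new - R) / dt
        R = R_new
        # (e) fitness values
        M = np.array([-f(r) for r in R])
        # (g) accelerations: only strictly fitter probes attract (unit step U)
        a = np.zeros_like(R)
        for p in range(n_probes):
            for k in range(n_probes):
                if k != p and M[k] > M[p]:
                    d = R[k] - R[p]
                    dist = np.linalg.norm(d)
                    if dist > 0.0:
                        a[p] += G * (M[k] - M[p]) ** alpha * d / dist ** beta
    best = int(np.argmax(M))
    return R[best], float(-M[best])
```

The point of the sketch is the ordering of steps (b)–(i) above, not solution quality; the DS-shrinking step (7) and early termination are omitted for brevity.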

4. Numerical Experiments

In this section, the performance of the ACFO algorithm is compared with existing algorithms, GSO, GA, PSO, PR-CFO, PF-CFO, CFO-NM, and ECFO, using a suite of twenty-three benchmark functions provided in [25]. The internal parameters of the ACFO algorithm are the same as those of the PF-CFO algorithm listed in Table 1, except for a few parameters chosen separately. In our experiments, the codes were written in MATLAB 7.0 and run on a PC with 2.00 GB RAM, a 2.10 GHz CPU, and the Windows 7 operating system. The stopping condition is that the iterations reach their maximum limit. We also stop the ACFO algorithm early if the difference between the average best fitness over 30 steps (including the current step) and the current best fitness is less than a prescribed tolerance.
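The early-stopping criterion above can be expressed as a small helper; the window of 30 steps comes from this section, while the function name and the tolerance value are assumptions:

```python
from collections import deque

def make_early_stop(window=30, tol=1e-8):
    """Return a test implementing the paper's early-stopping criterion:
    stop when the average best fitness over the last `window` steps
    (including the current one) differs from the current best fitness
    by less than tol. The tolerance value here is an assumed placeholder."""
    history = deque(maxlen=window)

    def should_stop(best_fitness):
        history.append(best_fitness)
        if len(history) < window:
            return False  # not enough steps observed yet
        return abs(sum(history) / len(history) - best_fitness) < tol

    return should_stop
```

A stagnating best fitness triggers the stop, while a still-improving run keeps the window average away from the current value.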

In Table 2, the columns stand for the test function, the dimension of the decision space, the optimum minimum value for each function, and the total number of function evaluations, respectively. The statistical data in Table 2 for PR-CFO and PF-CFO are reproduced from [13] and [15], respectively. The best fitness is the optimum maximum (the negative of each benchmark function is taken, so minimization becomes maximization). The set of twenty-three benchmark functions is divided into unimodal functions (f1 to f7), multimodal functions (f8 to f13), and low-dimensional multimodal functions (f14 to f23). Table 3 summarizes the obtained optimum minimum results, which are compared with other optimization algorithms, such as GA, PSO, GSO, CFO-NM, and ECFO. The statistical data for CFO-NM and ECFO are from [16, 17], while the other statistical data are from [5]. In Tables 2 and 3, a star denotes that the numerical result is the best one among all the compared algorithms. In Table 3, the symbol "—" means that the problem was not calculated in the original references.

From Table 2, it is clearly seen that the ACFO algorithm yields significantly better performance than the PR-CFO algorithm on most of the unimodal benchmark functions, although ACFO has a worse result on one function and the same result on another compared to PR-CFO. From the comparisons between the ACFO and PF-CFO algorithms, we can find that ACFO performs better than PF-CFO on two functions and obtains the same results as PF-CFO on the others. It should be noted that both the ACFO and PF-CFO algorithms obtain the optimum minimum values of several of these functions.

The next set of benchmark functions consists of multimodal functions with many local minima. From Table 2, we can see that the ACFO algorithm outperforms the PR-CFO and PF-CFO algorithms on all but three of these functions, and on one of them PR-CFO and PF-CFO are superior to ACFO by only a very small margin.

The remaining benchmark functions are low-dimensional multimodal functions. From the comparison shown in Table 2, it can clearly be seen that the best fitness values generated by the ACFO, PR-CFO, and PF-CFO algorithms are almost the same on these functions.

From Table 2, we can also see that the ACFO algorithm is superior to the PR-CFO and PF-CFO algorithms in terms of the total number of function evaluations, with two exceptions.

From Table 3, we can clearly see that the ACFO algorithm outperforms the GA, PSO, GSO, CFO-NM, and ECFO algorithms on the unimodal test functions; the only exception is one function on which the ECFO algorithm is superior to ACFO. For the multimodal test functions, the ACFO algorithm performs better than the GA and PSO algorithms with one exception, and ACFO outperforms GSO, CFO-NM, and ECFO on these functions. For the low-dimensional multimodal functions, we can also find that the ACFO algorithm generates better results than GA and PSO; the only exception is one function on which ACFO yields a result similar to that of PSO. From the comparisons between ACFO and GSO, we can see that ACFO outperforms GSO on some of these functions and has similar results to GSO on the others. In addition, ACFO also has results similar to those of the CFO-NM algorithm on these functions. Overall, the comparisons show that the ACFO algorithm performs better than the other algorithms.

Figures 1–7 show the convergence curves of the PR-CFO, PF-CFO, and ACFO algorithms on selected functions. The vertical axis is the logarithm of (1 + best function value), and the horizontal axis is the number of iterations. From Figures 1–6, we can clearly see that the ACFO algorithm tends to find the global optimum faster than the other algorithms and hence has a higher convergence rate. According to Figure 7, the ACFO and PR-CFO algorithms have similar convergence rates, but ACFO has a better convergence rate than PF-CFO.

5. Conclusion

This paper presents the ACFO algorithm, which enhances the convergence capability of the CFO algorithm. The ACFO algorithm introduces a weight and a velocity into the equation that updates each probe's position. Based on the stability theory of discrete time-varying dynamic systems, we define the adaptive weight and gravitational constant. To test its performance, the ACFO algorithm is compared with improved CFO algorithms and other state-of-the-art algorithms. The simulation results show that ACFO outperforms the other algorithms.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the editor and anonymous referees for their valuable comments and suggestions which improved this paper greatly. This work is partly supported by the National Natural Science Foundation of China (11371071) and Scientific Research Foundation of Liaoning Province Educational Department (L2013426).