Rui Wang, Yongquan Zhou, "Flower Pollination Algorithm with Dimension by Dimension Improvement", Mathematical Problems in Engineering, vol. 2014, Article ID 481791, 9 pages, 2014. https://doi.org/10.1155/2014/481791
Flower Pollination Algorithm with Dimension by Dimension Improvement
Flower pollination algorithm (FPA) is a new nature-inspired intelligent algorithm which uses a whole-solution update and evaluation strategy. For multidimensional function optimization problems, this strategy may deteriorate the convergence speed and the quality of solutions due to interference phenomena among dimensions. To overcome this shortcoming, a flower pollination algorithm with dimension by dimension improvement is proposed in this paper. During the iterations of the improved algorithm, a dimension by dimension update and evaluation strategy on solutions is used. In addition, to enhance the local search ability, a local neighborhood search strategy is also applied in the improved algorithm. Simulation experiments show that the proposed strategies improve the convergence speed and the quality of solutions effectively.
In recent years, more and more bioinspired algorithms have been proposed, such as genetic algorithm (GA), simulated annealing (SA), particle swarm optimization (PSO), firefly algorithm (FA), glowworm swarm optimization (GSO), monkey search (MS), bacterial foraging optimization algorithm (BFOA), invasive weed optimization (IWO), cultural algorithms (CA), and harmony search (HS). Because of their advantages of global and parallel efficiency, robustness, and universality, swarm intelligence algorithms have been widely used in engineering optimization, scientific computing, automatic control, and other fields.
Flower pollination algorithm (proposed by Yang in 2012) is a new population-based intelligent optimization algorithm that simulates flower pollination behavior. In the last two years, FPA has been extensively researched by scholars to solve integer programming problems, Sudoku puzzles, and wireless sensor network lifetime global optimization. It is estimated that there are over 250,000 species of flowering plants in nature, and biologists consider that almost four-fifths of all plant species are flowering species. Flower pollination behavior stems from the purpose of reproduction. From the biological evolution point of view, the objective of flower pollination is the survival of the fittest and the optimal reproduction of species; all the factors and processes of flower pollination interact so as to achieve optimal reproduction of the flowering plants. In nature, pollination can be divided into two forms: biotic and abiotic. Almost 90% of pollen grains are transferred by insects and animals; we call this biotic pollination. The other 10% of pollen grains are transferred by wind [15, 16]; they need no pollinators, and we call this form abiotic pollination. Pollinators can be very diverse; research shows there are almost 200,000 kinds of pollinators.
Self-pollination and cross-pollination are the two different ways of pollination. Cross-pollination means pollination occurs with pollen from a flower of a different plant, while self-pollination is just the opposite. Biotic cross-pollination can occur over long distances; pollinators such as bees, bats, and birds can fly a long distance, and thus they can be considered global pollinators. These pollinators exhibit Lévy flight behavior, with flight distance steps obeying a Lévy distribution. This behavior can inspire the design of new optimization algorithms. Flower pollination algorithm is an optimization algorithm that simulates the flower pollination behavior described above; accordingly, it can be divided into a global pollination process and a local pollination process.
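The Lévy-flight behavior described above can be sketched in code. Below is a minimal sketch (not the paper's implementation) that draws Lévy-distributed steps with Mantegna's algorithm and applies the standard FPA global-pollination move toward the best flower; the exponent β = 1.5 is a commonly used default and an assumption here.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a dim-dimensional Levy-distributed step (Mantegna's algorithm)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def global_pollination(x, g_best, rng=None):
    """One global-pollination move: a Levy-flight step toward the best flower."""
    return x + levy_step(len(x), rng=rng) * (g_best - x)
```

Because the Lévy distribution is heavy-tailed, occasional very long steps let global pollinators escape local optima while most steps remain short.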
2. FPA with Dimension by Dimension Improvement
In order to enhance the global search and local search abilities, we applied three optimization strategies to the basic flower pollination algorithm (FPA), namely, the local neighborhood search strategy (LNSS), the dimension by dimension evaluation and improvement strategy (DDEIS), and the dynamic switching probability strategy (DSPS).
2.1. Local Neighborhood Search Strategy (LNSS)
The basic FPA uses a differential evolution (DE) style operator to do local search, and experimental results show that the local search ability of this operator is limited. Thus, we add LNSS to the local search process to enhance the exploitation ability of the algorithm.
Firstly, we explain the local neighborhood model. In this model, each vector carries out mutation using the best vector of only a small neighborhood rather than of the entire population. Suppose that there exists a differential evolutionary population {X_1, X_2, ..., X_NP}, where each X_i is a parameter vector of dimension D. The vector subscript indices are randomly assigned to ensure the diversity of each neighborhood. For each vector X_i, we can define a neighborhood of radius k, consisting of the vectors X_{i−k}, ..., X_i, ..., X_{i+k}. Assume that the vectors are arranged according to their subscript indices in a ring topology, so that X_{i−1} and X_{i+1} are the two direct neighbors of X_i. The concept of the local neighborhood model is shown in Figures 1 and 2. The neighborhood topology here is static and determined by the collection of vector subscript indices. The local neighborhood model can be expressed by the following formula:

L_i = X_i + α · (X_{n_best} − X_i) + β · (X_p − X_q),   (1)

where X_{n_best} is the best vector in the neighborhood of X_i, p, q ∈ [i − k, i + k] with p ≠ q ≠ i, and α and β are two scale factors.
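The ring-topology neighborhood and mutation above can be sketched as follows; the neighborhood radius k and the scale factor values are illustrative assumptions, and indices wrap around modulo the population size:

```python
import numpy as np

def neighborhood_best(pop, fitness, i, k):
    """Best vector in the radius-k ring neighborhood of index i (minimization)."""
    n = len(pop)
    idx = [(i + j) % n for j in range(-k, k + 1)]
    return pop[min(idx, key=lambda j: fitness[j])]

def local_neighborhood_move(pop, fitness, i, k, alpha=0.8, beta=0.8, rng=None):
    """Formula (1): move X_i toward its neighborhood best plus a scaled
    difference of two distinct random neighbors X_p and X_q."""
    rng = rng or np.random.default_rng()
    n = len(pop)
    n_best = neighborhood_best(pop, fitness, i, k)
    neighbors = [(i + j) % n for j in range(-k, k + 1) if j != 0]
    p, q = rng.choice(neighbors, size=2, replace=False)
    return pop[i] + alpha * (n_best - pop[i]) + beta * (pop[p] - pop[q])
```

Because each vector is pulled only toward its local neighborhood best, different regions of the ring can explore different basins before information spreads through the whole population.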
2.2. Dimension by Dimension Evaluation and Improvement Strategy (DDEIS)
Flower pollination algorithm uses a whole-solution update and evaluation strategy. For multidimensional function optimization problems, this strategy may deteriorate the convergence speed and the quality of solutions due to interference phenomena among dimensions. To overcome this shortcoming, we add DDEIS to the local search process of FPA.
In FPA, Lévy flight can improve the diversity of the population and strengthen the global search ability of the algorithm, but for a multidimensional objective function, the overall update and evaluation strategy affects the convergence rate and the quality of solutions. DDEIS therefore updates and evaluates solutions dimension by dimension. Assume that the objective function is f(X) and that X = (x_1, x_2, ..., x_D) is a candidate solution with objective value f(X). We use formula (1) to update X and get a candidate solution. For example, when the first dimension value of X updates from 0.5 to 0, combined with the values of the other dimensions we obtain a new solution X'. If the objective value f(X') < f(X), the update improves the current solution; thus, we accept this dimension's update and move the update operation on to the next dimension. If instead the first dimension value of X updates from 0.5 to 1 and the resulting objective value satisfies f(X') > f(X), the update fails to improve X, and we abandon the current dimension's updated value and move the update operation on to the next dimension. The strategy is described in Algorithm 1.
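The acceptance rule can be sketched as follows (a minimal minimization sketch, not Algorithm 1 verbatim; the Sphere function and the sample vectors are illustrative):

```python
import numpy as np

def ddeis_update(x, candidate, f):
    """Accept the candidate's components one dimension at a time,
    keeping only those that improve the objective f (minimization)."""
    best = x.copy()
    best_val = f(best)
    for d in range(len(x)):
        trial = best.copy()
        trial[d] = candidate[d]
        val = f(trial)
        if val < best_val:      # this dimension improves the solution: keep it
            best, best_val = trial, val
        # otherwise discard this dimension's update and move on
    return best, best_val

sphere = lambda x: float(np.sum(x ** 2))
x = np.array([0.5, 0.5])
candidate = np.array([0.0, 1.0])
new_x, new_val = ddeis_update(x, candidate, sphere)
# dimension 1 (0.5 -> 0.0) is accepted; dimension 2 (0.5 -> 1.0) is rejected
```

Evaluating per dimension costs extra objective evaluations but prevents a good update in one dimension from being masked by a bad update in another.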
2.3. Dynamic Switching Probability Strategy (DSPS)
In FPA, local search and global search are controlled by a switching probability p, which is a constant value. We suppose that a reasonable algorithm should do more global search at the beginning of the search process and less global search at the end. Thus, we apply the dynamic switching probability strategy (DSPS) to adjust the proportion of the two kinds of search. The switching probability alters according to the following formula:

p(t) = p_max − (p_max − p_min) · t / t_max,   (2)

where p_max and p_min are the initial and final switching probabilities, t_max is the maximum number of iterations of DDIFPA, and t is the current iteration. The specific implementation steps of FPA with dimension by dimension improvement (DDIFPA) can be summarized in the pseudocode shown in Algorithm 2.
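The decreasing schedule and the overall loop might be sketched as follows. This is a simplified skeleton under stated assumptions: p_max = 0.8 and p_min = 0.2 are assumed bounds, and the global step uses a Gaussian move toward the best flower in place of a full Lévy flight.

```python
import numpy as np

def switch_probability(t, t_max, p_max=0.8, p_min=0.2):
    """Linearly decrease the switch probability so that global search
    dominates early iterations and local search dominates late ones."""
    return p_max - (p_max - p_min) * t / t_max

def ddifpa_skeleton(f, dim, n_pop=50, t_max=100, lb=-5.0, ub=5.0, seed=0):
    """Skeleton of the DDIFPA main loop with dimension-wise acceptance."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n_pop, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for t in range(t_max):
        p = switch_probability(t, t_max)
        for i in range(n_pop):
            if rng.random() < p:
                # global pollination: move toward the best flower
                cand = pop[i] + rng.normal(0.0, 1.0, dim) * (best - pop[i])
            else:
                # local pollination: difference of two random flowers
                j, k = rng.choice(n_pop, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lb, ub)
            # dimension by dimension acceptance (DDEIS)
            for d in range(dim):
                trial = pop[i].copy()
                trial[d] = cand[d]
                v = f(trial)
                if v < fit[i]:
                    pop[i], fit[i] = trial, v
        best = pop[fit.argmin()].copy()
    return best, float(fit.min())
```

The dimension by dimension acceptance guarantees that each individual's fitness never worsens, so the best value is monotonically non-increasing over the run.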
3. Numerical Simulation Experiments
In this section, we apply 12 standard test functions to evaluate the optimization performance of FPA with dimension by dimension improvement (DDIFPA). The mean and standard deviation results of 20 independent runs for each algorithm are summarized in Table 2. The 12 standard benchmark functions have been widely used in the literature; their dimensions, scopes, optimal values, and iteration counts are given in Table 1. We also perform some high-dimensional tests, and the results are shown in Table 3.
3.1. Experimental Setup
All of the algorithms were programmed in MATLAB R2012a; the numerical experiments were run on a machine with an AMD Athlon(tm) processor and 2 GB of memory.
3.2. Comparison of Each Algorithm Performance
The proposed DDIFPA algorithm is compared with the mainstream swarm intelligence algorithms FPA, PSO, DE, RCBBO, GSA, FA, CS, and ABC, using the means and standard deviations to compare their optimization performance. The control parameter settings of the mentioned algorithms are given as follows.
PSO parameters setting: weight factor , ; the population size is 100.
DE parameters setting: and CR = 0.9 in accordance with the suggestions given in ; the population size is 100.
ABC parameters setting: limit = 5D has been used as recommended in ; the population size is 50 because this algorithm has two phases.
RCBBO parameters setting: maximum immigration rate I = 1, maximum emigration rate , and mutation probability have been used as recommended in ; the population size is 100.
CS parameters setting: and have been used as recommended in ; the population size is 50 because this algorithm has two phases.
GSA parameters setting: and which is set to NP and is decreased linearly to 1 have been used as recommended in ; the population size is 100.
FA parameters setting: , , and have been used as recommended in ; the population size is 100.
FPA parameters setting: the population size is 50 because this algorithm has two phases.
DDIFPA parameters setting: the population size is 50 because this algorithm has two phases.
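The 20-run mean and standard deviation comparison reported in Table 2 can be reproduced with a small harness like the one below; `random_search` is a hypothetical stand-in optimizer used only to make the sketch runnable, not one of the algorithms compared in the paper:

```python
import numpy as np

def benchmark(optimizer, f, runs=20, seed0=0):
    """Run an optimizer `runs` times independently and report the mean and
    standard deviation of the best objective values found."""
    results = [optimizer(f, seed=seed0 + r) for r in range(runs)]
    return float(np.mean(results)), float(np.std(results))

def random_search(f, seed=0, evals=1000, dim=2):
    """Hypothetical stand-in optimizer: pure random search on [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5.0, 5.0, (evals, dim))
    return min(f(x) for x in pts)

sphere = lambda x: float(np.sum(x ** 2))
mean, std = benchmark(random_search, sphere)
```

Fixing a distinct seed per run keeps the 20 runs independent yet reproducible, which is what makes the reported means and standard deviations comparable across algorithms.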
From the rank of each function in Table 2, we can conclude that DDIFPA provides the best results for most functions, outperforming FPA and the other algorithms, especially for functions , , and . For , the mean and standard deviation of DDIFPA are much better than those of FPA. For , the mean and standard deviation of DDIFPA are 117 orders of magnitude better than those of GSA and 127 orders of magnitude better than those of FPA. For , DDIFPA and FPA fail to give the best optimal solution. For , the mean and standard deviation of DDIFPA are 2 orders of magnitude better than those of FPA.
Figures 2 and 3 show the graphical analysis results of the ANOVA tests. As can be seen in Figure 2, when solving function , most of the algorithms obtain a stable optimal value over 20 independent runs except the RCBBO algorithm, and, as Figure 3 shows, when solving function , DDIFPA is more stable than the other algorithms.
Figures 4 and 5 show the evolution curves of the fitness function of each algorithm for and . From the two figures, we can conclude that DDIFPA has a faster convergence rate and a higher optimization precision.
For multimodal functions with many local minima, the final results are more important because these functions can reflect the ability of algorithm to escape from poor local optima and obtain the global optimum.
As can be seen in Table 2, for and , DDIFPA is in first place, and ABC achieves the optimal value when solving . For , the mean and standard deviation of DDIFPA are 15 orders of magnitude better than those of FPA. For , ABC and DDIFPA both achieve the optimal value and the standard deviations are both 0. For , the mean of DDIFPA is 11 orders of magnitude better than that of ABC, and the standard deviation of DDIFPA is 27 orders of magnitude better than that of ABC.
Figures 6 and 7 show the graphical analysis results of the ANOVA tests. Figure 6 shows that RCBBO, ABC, and DDIFPA can obtain the relatively stable optimal values. Figure 7 shows that when solving function , most of the algorithms can obtain the stable optimal value after 20 independent runs.