Mathematical Modeling and Analysis of Soft Computing (Special Issue)
Research Article | Open Access
Flower Pollination Algorithm with Dimension by Dimension Improvement
Flower pollination algorithm (FPA) is a new nature-inspired intelligent algorithm that updates and evaluates solutions as a whole. For multidimensional function optimization problems, this strategy may deteriorate the convergence speed and the quality of solutions because of interference among dimensions. To overcome this shortcoming, this paper proposes a flower pollination algorithm with dimension by dimension improvement. During the iterations of the improved algorithm, solutions are updated and evaluated dimension by dimension. In addition, a local neighborhood search strategy is applied to enhance the local search ability. Simulation experiments show that the proposed strategies effectively improve the convergence speed and the quality of solutions.
In recent years, more and more bioinspired algorithms have been proposed, such as the genetic algorithm (GA), simulated annealing (SA), particle swarm optimization (PSO), the firefly algorithm (FA), glowworm swarm optimization (GSO), monkey search (MS), the bacterial foraging optimization algorithm (BFOA), invasive weed optimization (IWO), cultural algorithms (CA), and harmony search (HS). Because of their global and parallel efficiency, robustness, and universality, swarm intelligence algorithms have been widely used in engineering optimization, scientific computing, automatic control, and other fields.
The flower pollination algorithm (proposed by Yang in 2012) is a new population-based intelligent optimization algorithm that simulates flower pollination behavior. In the last two years, FPA has been extensively researched and applied to integer programming problems, Sudoku puzzles, and wireless sensor network lifetime global optimization. It is estimated that there are over 250,000 species of flowering plants in nature, and biologists consider that almost four-fifths of all plant species are flowering species. Flower pollination behavior serves the purpose of reproduction. From the biological evolution point of view, the objective of flower pollination is the survival of the fittest and the optimal reproduction of species, and the factors and processes of flower pollination interact to achieve this objective. In nature, pollination takes two forms: biotic and abiotic. Almost 90% of pollen grains are transferred by insects and animals; this is called biotic pollination. The remaining 10% are transferred by wind [15, 16] and need no pollinators; this form is called abiotic pollination. Pollinators are very diverse; research suggests there are about 200,000 kinds of pollinators.
Self-pollination and cross-pollination are two different modes of pollination. Cross-pollination occurs with pollen from a flower of a different plant, whereas self-pollination is just the opposite. Biotic cross-pollination can occur over long distances because pollinators such as bees, bats, and birds can fly far; they can therefore be regarded as global pollinators. These pollinators exhibit Lévy flight behavior, with flight step lengths obeying a Lévy distribution, which inspires the design of new optimization algorithms. The flower pollination algorithm simulates the pollination behavior described above and is accordingly divided into a global pollination process and a local pollination process.
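As a minimal sketch of the global pollination move described above, the following Python snippet draws a Lévy-distributed step with Mantegna's algorithm and moves a solution toward the global best. The parameter values (`beta=1.5`, scaling factor `gamma=0.01`) are conventional choices from the FPA literature, not values taken from this paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step length via Mantegna's algorithm.

    beta is the Lévy exponent; 1.5 is a value commonly used with FPA.
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def global_pollination(x, g_best, gamma=0.01, beta=1.5):
    """One global-pollination move: x' = x + gamma * L * (g_best - x)."""
    step = levy_step(beta)
    return [xi + gamma * step * (gi - xi) for xi, gi in zip(x, g_best)]
```

Because the step scales the difference from the current global best, a solution that already coincides with the best is left unchanged, while distant solutions occasionally take very long jumps, which is what gives Lévy-flight pollinators their global search character.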
2. FPA with Dimension by Dimension Improvement
In order to enhance the global and local search abilities, we applied three optimization strategies to the basic flower pollination algorithm (FPA): the local neighborhood search strategy (LNSS), the dimension by dimension evaluation and improvement strategy (DDEIS), and the dynamic switching probability strategy (DSPS).
2.1. Local Neighborhood Search Strategy (LNSS)
FPA (developed by Yang and Deb) uses a differential evolution (DE) style move for local search, and experimental results show that the local search ability of DE is limited. Thus, we add LNSS to the local search process to enhance its exploitation ability.
Firstly, we explain the local neighborhood model. In this model, each vector performs mutation using the best vector of only a small neighborhood rather than of the entire population. Suppose there is a differential evolutionary population in which each member is a parameter vector of dimension D. The vector subscript indices are assigned randomly to ensure the diversity of each neighborhood. For each vector we define a neighborhood of a given radius, with the vectors arranged by subscript index on a ring topology, so that each vector has two direct neighbors. The concept of the local neighborhood model is shown in Figures 1 and 2. The neighborhood topology is static and determined by the collection of vector subscript indices. The local neighborhood model can be expressed by the following formula, in which the mutant vector combines the best vector of the neighborhood with a scaled difference of two neighborhood members, using two scale factors.
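The ring-topology mutation described above can be sketched as follows. This is a generic rendering of the standard local neighborhood (DEGL-style) mutation under minimization; the function name, the precomputed `fitness` list, and the scale factors `alpha` and `beta` are illustrative assumptions, not names from the paper.

```python
import random

def local_neighborhood_mutation(pop, fitness, i, k, alpha, beta):
    """Mutate vector i using only its ring neighborhood of radius k.

    pop     : list of parameter vectors (lists of floats)
    fitness : fitness[j] is the objective value of pop[j] (minimization)
    alpha, beta : the two scale factors of the local model
    """
    np_ = len(pop)
    idx = [(i + d) % np_ for d in range(-k, k + 1)]      # ring neighbors of i
    n_best = min(idx, key=lambda j: fitness[j])          # best vector in neighborhood
    p, q = random.sample([j for j in idx if j != i], 2)  # two distinct neighbors
    return [xi + alpha * (pop[n_best][d] - xi) + beta * (pop[p][d] - pop[q][d])
            for d, xi in enumerate(pop[i])]
```

Because each vector only sees its immediate ring neighbors, good solutions spread through the population slowly, which preserves diversity during exploitation.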
2.2. Dimension by Dimension Evaluation and Improvement Strategy (DDEIS)
The flower pollination algorithm updates and evaluates solutions as a whole. For multidimensional function optimization problems, this strategy may deteriorate the convergence speed and the quality of solutions because of interference among dimensions. To overcome this shortcoming, we add DDEIS to the local search process of FPA.
In FPA, Lévy flight improves the diversity of the population and strengthens the global search ability of the algorithm. But for a multidimensional objective function, the overall update and evaluation strategy degrades the convergence rate and the quality of solutions. DDEIS instead updates and evaluates dimension by dimension. Assume that the objective function is f and that X is a candidate solution with objective value f(X). We use formula (1) to update X. For example, when the first dimension value of X is updated from 0.5 to 0 and combined with the values of the other dimensions, we obtain a new solution. If its objective value improves on the current solution, we accept this update and move the update operation to the next dimension. If instead the first dimension value of X is updated from 0.5 to 1 and the resulting objective value fails to improve the current solution, we abandon the current dimension's updated value and move the update operation to the next dimension. The strategy is described in Algorithm 1.
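The accept-or-revert loop described above can be sketched directly. This is a generic rendering of the dimension by dimension evaluation under minimization; the function name `ddeis_update` is illustrative.

```python
def ddeis_update(x, candidate, f):
    """Accept candidate coordinates dimension by dimension.

    For each dimension d, tentatively replace x[d] with candidate[d];
    keep the change only if it improves the objective f (minimization),
    otherwise revert and move on to the next dimension.
    """
    best = list(x)
    best_val = f(best)
    for d in range(len(best)):
        old = best[d]
        best[d] = candidate[d]
        val = f(best)
        if val < best_val:
            best_val = val      # improving dimension: accept
        else:
            best[d] = old       # no improvement: revert this dimension
    return best, best_val
```

On the sphere function with x = (0.5, 0.5) and candidate (0, 1), the first dimension's change is accepted and the second is reverted, matching the worked example in the text.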
2.3. Dynamic Switching Probability Strategy (DSPS)
In FPA, local search and global search are controlled by a switching probability p, which is a constant. We suppose that a reasonable algorithm should do more global search at the beginning of the search process and less at the end. Thus, we apply the dynamic switching probability strategy (DSPS) to adjust the proportion of the two kinds of search. The switching probability is altered according to the following formula, in which T_max is the maximum number of iterations of DDIFPA and t is the current iteration. Specific implementation steps of FPA with dimension by dimension improvement (DDIFPA) are summarized in the pseudocode shown in Algorithm 2.
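The exact schedule used in the paper is not reproduced in this extraction, but the stated goal (more global search early, less at the end) can be sketched with a simple linear decay. The endpoint values `p_max` and `p_min` below are illustrative assumptions, with `p` taken as the probability of choosing global pollination at iteration `t`.

```python
def switching_probability(t, t_max, p_max=0.8, p_min=0.5):
    """Linearly decrease the switching probability over the run.

    Illustrative linear schedule (not the paper's formula): p starts at
    p_max (mostly global search) and ends at p_min (mostly local search).
    """
    return p_max - (p_max - p_min) * t / t_max
```

Any monotonically decreasing schedule in (0, 1) serves the same purpose; the linear form simply makes the transition from exploration to exploitation steady across iterations.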
3. Numerical Simulation Experiments
In this section, we apply 12 standard test functions to evaluate the performance of FPA with dimension by dimension improvement (DDIFPA). The means and standard deviations over 20 independent runs for each algorithm are summarized in Table 2. The 12 standard benchmark functions have been widely used in the literature; their dimensions, scopes, optimal values, and iteration counts are given in Table 1. We also perform some high-dimensional tests, whose results are shown in Table 3.
3.1. Experimental Setup
All of the algorithms were programmed in MATLAB R2012a; the numerical experiments were run on an AMD Athlon (tm) processor with 2 GB of memory.
3.2. Comparison of Each Algorithm Performance
The proposed DDIFPA algorithm is compared with the mainstream swarm intelligence algorithms FPA, PSO, DE, RCBBO, GSA, FA, CS, and ABC, using the means and standard deviations to compare their optimization performance. The control parameter settings of the mentioned algorithms are given as follows.
PSO parameter settings: weight factors , ; the population size is 100 .
DE parameter settings: and CR = 0.9 in accordance with the suggestions given in ; the population size is 100.
ABC parameter settings: limit = 5D has been used as recommended in ; the population size is 50 because this algorithm has two phases.
RCBBO parameter settings: maximum immigration rate I = 1, maximum emigration rate , and mutation probability have been used as recommended in ; the population size is 100.
CS parameter settings: and have been used as recommended in ; the population size is 50 because this algorithm has two phases.
GSA parameter settings: and , which is set to NP and decreased linearly to 1, have been used as recommended in ; the population size is 100.
FA parameter settings: , , and have been used as recommended in ; the population size is 100.
FPA parameter settings: the population size is 50 because this algorithm has two phases.
DDIFPA parameter settings: the population size is 50 because this algorithm has two phases.
From the ranks in Table 2, we can conclude that DDIFPA provides many of the best results, outperforming FPA and the other algorithms, especially for functions , , and . For , the mean and standard deviation of DDIFPA are much better than those of FPA. For , the mean and standard deviation of DDIFPA are 117 orders of magnitude better than those of GSA and 127 orders of magnitude better than those of FPA. For , DDIFPA and FPA fail to give the best optimal solution. For , the mean and standard deviation of DDIFPA are 2 orders of magnitude better than those of FPA.
Figures 2 and 3 show the graphical analysis results of the ANOVA tests. As can be seen in Figure 2, when solving function , most of the algorithms obtain a stable optimal value over 20 independent runs, except the RCBBO algorithm; in Figure 3, when solving function , DDIFPA is more stable than the other algorithms.
Figures 4 and 5 show the fitness evolution curves of each algorithm for and . From the two figures, we can conclude that DDIFPA has a faster convergence rate and a higher optimization precision.
For multimodal functions with many local minima, the final results are more important because these functions reflect the ability of an algorithm to escape from poor local optima and obtain the global optimum.
As can be seen in Table 2, for and , DDIFPA is in first place; ABC achieves the optimal value when solving . For , the mean and standard deviation of DDIFPA are 15 orders of magnitude better than those of FPA. For , ABC and DDIFPA both achieve the optimal value, and the standard deviations are both 0. For , the mean of DDIFPA is 11 orders of magnitude better than that of ABC, and the standard deviation of DDIFPA is 27 orders of magnitude better than that of ABC.
Figures 6 and 7 show the graphical analysis results of the ANOVA tests. Figure 6 shows that RCBBO, ABC, and DDIFPA can obtain the relatively stable optimal values. Figure 7 shows that when solving function , most of the algorithms can obtain the stable optimal value after 20 independent runs.
Figures 8 and 9 show the fitness evolution curves. From Figure 8, we can conclude that both ABC and DDIFPA converge to the optimal solution. From Figure 9, we can conclude that DDIFPA converges to a more precise point than the other algorithms, and its convergence speed is faster.
From Table 2, and are multimodal low-dimensional functions. For , the solutions of most of the algorithms are accurate to 3 to 4 decimal places, and DDIFPA ranks second. For , DDIFPA also ranks second. The experimental results show that DDIFPA performs well on multimodal low-dimensional problems.
3.3. Experimental Analysis
We have carried out benchmark validations on unimodal and multimodal test functions using the proposed algorithm (DDIFPA) with three improvement strategies (the local neighborhood search strategy, the dimension by dimension evaluation and improvement strategy, and the dynamic switching probability strategy). An optimization process can be divided into two key components, local search and global search, and we use a dynamic switching probability to control the whole search process. LNSS and DDEIS are applied to the local search process to enhance its exploitation ability. Among the 12 test functions listed above, are unimodal, and the remarkable results confirm that DDIFPA has stronger exploitation ability than FPA and the other algorithms. DSPS, which improves the ability to escape from poor local optima, was applied to enhance the exploration ability; it also balances exploitation and exploration dynamically. For the multimodal benchmark functions (), we can conclude that DDIFPA converges to a more precise point than the other algorithms, and its convergence speed is faster. Our simulation results for finding the global optima of various test functions suggest that DDIFPA outperforms FPA and the other mentioned algorithms in terms of both precision and convergence speed.
3.4. High-Dimensional Functions Test
In the previous sections, 12 standard test functions were applied to evaluate the performance of FPA with dimension by dimension improvement (DDIFPA) in the low-dimensional case. In order to evaluate the performance of DDIFPA comprehensively, we also perform some high-dimensional tests in . The results are shown in Table 3; as can be seen there, DDIFPA also solves high-dimensional problems efficiently and stably.
In this paper, three optimization strategies (the local neighborhood search strategy, the dimension by dimension evaluation and improvement strategy, and the dynamic switching probability strategy) have been applied to FPA to remedy its deficiencies. Simulations on 12 typical standard benchmark functions show that the DDIFPA algorithm generally has strong global search and local optimization abilities and effectively avoids the tendency of other algorithms to fall into local optima. DDIFPA improves the convergence speed and convergence precision of FPA. The experimental results show that it is an effective algorithm for solving complex function optimization problems.
In this paper, we only consider the global optimization. The algorithm can be extended to solve other problems such as constrained optimization problems and multiobjective optimization problem. In addition, many engineering design problems are typically difficult to solve. The application of the proposed FPA with dimension by dimension improvement in engineering design optimization may prove fruitful.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work is supported by the National Science Foundation of China under Grant nos. 61165015 and 61463007, the Key Project of Guangxi Science Foundation under Grant no. 2012GXNSFDA053028, and the Key Project of Guangxi High School Science Foundation under Grant nos. 20121ZD008 and 201203YB072.
- J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
- S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
- J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, 1995.
- X.-S. Yang, “Multiobjective firefly algorithm for continuous optimization,” Engineering with Computers, vol. 29, no. 2, pp. 175–184, 2013.
- Z. Yongquan, L. Jiakun, and Z. Guangwei, “Leader glowworm swarm optimization algorithm for solving nonlinear equations systems,” Przeglad Elektrotechniczny, vol. 88, no. 1, pp. 101–106, 2012.
- A. Mucherino and O. Seref, “Monkey search: a novel metaheuristic search for global optimization,” in Proceedings of the Conference on Data Mining, Systems Analysis, and Optimization in Biomedicine, pp. 162–173, Gainesville, Fla, USA, March 2007.
- K. M. Passino, “Biomimicry of bacterial foraging for distributed optimization and control,” IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52–67, 2002.
- A. R. Mehrabian and C. Lucas, “A novel numerical optimization algorithm inspired from weed colonization,” Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.
- R. G. Reynolds, “Cultural algorithms: theory and applications,” in New Ideas in Optimization, D. W. Corne, M. Dorigo, and F. Glover, Eds., pp. 367–378, McGraw-Hill, Maidenhead, UK, 1999.
- B. Alatas, “Chaotic harmony search algorithms,” Applied Mathematics and Computation, vol. 216, no. 9, pp. 2687–2699, 2010.
- X. S. Yang, “Flower pollination algorithm for global optimization,” in Unconventional Computation and Natural Computation, vol. 7445 of Lecture Notes in Computer Science, pp. 240–249, 2012.
- I. El-Henawy and M. Ismail, “An improved chaotic flower pollination algorithm for solving large integer programming problems,” International Journal of Digital Content Technology and Its Applications, vol. 8, no. 3, pp. 72–81, 2014.
- O. Abdel Raouf, I. El-henawy, and M. Abdel-Baset, “A novel hybrid flower pollination algorithm with chaotic harmony search for solving sudoku puzzles,” International Journal of Modern Education and Computer Science, vol. 3, pp. 38–44, 2014.
- M. Sharawi, E. Emary, I. A. Saroit, and H. El-Mahdy, “Flower pollination optimization algorithm for wireless sensor network lifetime global optimization,” International Journal of Soft Computing and Engineering, vol. 4, no. 3, pp. 54–59, 2014.
- Wikipedia article on pollination, http://en.wikipedia.org/wiki/Pollination.
- B. J. Glover, Understanding Flowers and Flowering: An Integrated Approach, Oxford University Press, 2007.
- X. S. Yang and S. Deb, “Eagle strategy using Lévy walk and firefly algorithms for stochastic optimization,