Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 108768, 9 pages
http://dx.doi.org/10.1155/2013/108768
Research Article

LGMS-FOA: An Improved Fruit Fly Optimization Algorithm for Solving Optimization Problems

School of Economics and Business Administration, Chongqing University, Chongqing 400030, China

Received 16 May 2013; Revised 1 August 2013; Accepted 18 August 2013

Academic Editor: Yudong Zhang

Copyright © 2013 Dan Shan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recently, a new fruit fly optimization algorithm (FOA) was proposed to solve optimization problems. In this paper, we empirically study the performance of FOA. Six different nonlinear functions are selected as testing functions. The experimental results illustrate that FOA cannot solve complex optimization problems effectively. In order to enhance the performance of FOA, an improved FOA (named LGMS-FOA) is proposed. Simulation results and comparisons of LGMS-FOA with FOA and other metaheuristics show that LGMS-FOA greatly enhances searching efficiency and greatly improves searching quality.

1. Introduction

Optimization problems arise extensively in science, engineering, and finance, and how to solve them has long been a concern of researchers. In the past decade, stochastic optimization algorithms have been used to solve these problems due to their flexibility in finding solutions. These algorithms include the genetic algorithm (GA) [1–3], simulated annealing (SA) [4, 5], the ant colony optimization algorithm [5, 6], and the particle swarm optimization algorithm (PSO) [7–9]. However, these stochastic algorithms share common disadvantages: complicated computational processes and difficulty for beginners to understand.

Recently, a new stochastic optimization technique, the fruit fly optimization algorithm (FOA), was proposed by Pan [10]. Its development is based on the food finding behavior of the fruit fly. Compared with other stochastic algorithms, FOA has the advantages of being easy to understand and having a simple computational process. As a novel optimization algorithm, FOA has gained much attention in recent years [10, 11].

In order to study the performance of FOA, six famous nonlinear functions are selected as testing functions. Simulation results illustrate that FOA cannot solve complex optimization problems effectively. Analysis shows that FOA relies on a nonlinear generation mechanism of candidate solutions (abbreviated as NGMS), which has disadvantages that limit the performance of FOA. In order to enhance the performance of FOA, NGMS is first replaced with a linear generation mechanism of candidate solutions (abbreviated as LGMS), and then an LGMS-based improved FOA (abbreviated as LGMS-FOA) is proposed. Simulation results and comparisons of LGMS-FOA with FOA and other metaheuristics show that LGMS-FOA is more effective and reliable.

The rest of this paper is organized as follows. Section 2 introduces FOA. Section 3 introduces LGMS-FOA. Section 4 provides comparisons of FOA with LGMS-FOA and other metaheuristics. Section 5 concludes this paper.

2. FOA

2.1. Overview of FOA

FOA is a new method for finding the global optimum, based on the food finding behavior of the fruit fly. The fruit fly is superior to other species in vision and osphresis (as illustrated in Figure 1). The food finding process of the fruit fly has two steps: first, it smells the food source with its osphresis organ and flies in that direction; then, after it gets close to the food location, it uses its sensitive vision to find the food and the flocking location of other fruit flies and flies in that direction. Figure 2 shows the iterative food finding process of a fruit fly swarm.

Figure 1: Fruit fly.
Figure 2: Food finding iterative process of fruit fly swarm.

Based on the food finding characteristics of fruit fly swarm, the whole procedure of FOA is described as follows.

Step 1 (parameters initialization). The main parameters of FOA are the maximum iteration number (maxgen), the population size (sizepop), the random initialization fruit fly swarm location range (LR), and the random fly direction and distance zone of the fruit fly (FR).

Step 2. Nonlinear generation mechanism of candidate solution (NGMS):

Step 2.1. Initialize the fruit fly swarm location:
X_axis = rand(LR),  Y_axis = rand(LR). (1)

Step 2.2. Give the random direction and distance for food finding of an individual fruit fly using osphresis:
X_i = X_axis + rand(FR),  Y_i = Y_axis + rand(FR). (2)

Step 2.3. Calculate the distance of the food location to the origin:
D_i = sqrt(X_i^2 + Y_i^2). (3)

Step 2.4. Calculate the smell concentration judgment value (S_i), which is the reciprocal of the distance:
S_i = 1 / D_i. (4)

Remark 1. In fact, S_i is a candidate solution in the domain.

Remark 2. According to (1)–(4),
S_i = 1 / sqrt((X_axis + rand(FR))^2 + (Y_axis + rand(FR))^2). (5)
Equation (5) is called NGMS.

Step 3. Calculate the smell concentration (Smell_i) of the individual fruit fly location by inputting the smell concentration judgment value (S_i) into the smell concentration judgment function (also called the objective function):
Smell_i = Function(S_i). (6)

Step 4. Find out the fruit fly with the maximal smell concentration among the fruit fly swarm (for a minimization problem, max is replaced by min):
[bestSmell, bestIndex] = max(Smell_i). (7)

Step 5. Keep the maximal concentration value and the X, Y coordinate. Then, the fruit fly swarm flies toward that location by using vision:
Smellbest = bestSmell,  X_axis = X(bestIndex),  Y_axis = Y(bestIndex). (8)

Step 6. Enter iterative optimization and repeat Steps 2–5. When the smell concentration is no longer superior to the previous iterative smell concentration, or the iteration number reaches the maximal iteration number, the loop stops.
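The six steps above can be put together in code. The following is a minimal Python sketch of the FOA loop, assuming a one-dimensional minimization problem whose candidate solution is the scalar S_i; the function and variable names are illustrative, not from the original paper.

```python
import numpy as np

def foa(objective, maxgen=300, sizepop=50, lr=1.0, fr=1.0, seed=0):
    """Sketch of the basic FOA loop (Steps 1-6).

    `objective` is the smell concentration judgment function; it is
    MINIMIZED here, matching the benchmark setting of the paper.
    """
    rng = np.random.default_rng(seed)
    # Step 2.1: random initial swarm location in [-lr, lr].
    x_axis = rng.uniform(-lr, lr)
    y_axis = rng.uniform(-lr, lr)
    best_smell = np.inf
    best_s = None
    for _ in range(maxgen):
        # Step 2.2: random direction and distance for each fly in [-fr, fr].
        x = x_axis + rng.uniform(-fr, fr, sizepop)
        y = y_axis + rng.uniform(-fr, fr, sizepop)
        # Step 2.3: distance of each food location to the origin.
        dist = np.sqrt(x ** 2 + y ** 2)
        # Step 2.4: smell concentration judgment value (candidate solution).
        s = 1.0 / dist
        # Step 3: evaluate the smell concentration of each fly.
        smell = np.array([objective(si) for si in s])
        # Step 4: fly with the best (minimal) objective value.
        idx = np.argmin(smell)
        # Step 5: if it improves, the swarm flies toward that location.
        if smell[idx] < best_smell:
            best_smell = smell[idx]
            best_s = s[idx]
            x_axis, y_axis = x[idx], y[idx]
    return best_s, best_smell
```

Note that the candidate S_i is always positive here, which already hints at the limitation discussed in Section 2.3.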

2.2. Computational Experiments of FOA

In order to study the performance of FOA, six different nonlinear functions are selected as testing functions [12]. Table 1 shows the name, dimension, optimal solution, and extreme point of each function; for details, refer to Appendix A.

Table 1: Testing function.

Since FOA is a stochastic optimization algorithm, the solution found each time may not be the same; therefore, each function is run 100 times. If the final searching quality is within a given tolerance of the optimal value, the run is called a success run and its iteration number is stored. Two indexes, named "percentage of success (PS)" and "average valid iteration number (AVIN)," are defined as follows [13]:

PS = N_s / 100,  AVIN = (1 / N_s) * sum_{i=1}^{N_s} n_i, (9)

where N_s denotes the number of success runs among the 100 runs and n_i denotes the iteration number of the i-th success run.
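For concreteness, the two indexes can be computed from recorded run outcomes as follows; a small helper sketch in which each run is summarized by a (success, iteration-count) pair (the function name is illustrative):

```python
def ps_and_avin(runs):
    """Compute 'percentage of success' (PS) and 'average valid
    iteration number' (AVIN) from a list of (success, iterations) pairs.
    """
    successes = [it for ok, it in runs if ok]
    ps = 100.0 * len(successes) / len(runs)  # PS, in percent
    # AVIN averages iteration counts over success runs only.
    avin = sum(successes) / len(successes) if successes else float("nan")
    return ps, avin
```

For example, two success runs (10 and 20 iterations) out of four give PS = 50.0 and AVIN = 15.0.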

The parameters of FOA are maxgen = 300 and sizepop = 50; LR and FR are varied as listed in Table 2.

Table 2 shows PS and AVIN of FOA when solving the six testing functions. From Table 2, the following can be seen.

(1) For the three functions whose extreme points are nonzero, PS is always equal to zero no matter what values LR and FR take; hence FOA cannot solve problems whose extreme point is nonzero.

(2) When the scopes of FR and LR become large, PS of the three functions whose extreme points are zero increases; hence FOA can solve optimization problems whose extreme point is zero, provided that FR and LR are large enough.

(3) The average PS is very small no matter what values LR and FR take, so it is concluded that FOA cannot solve complex optimization problems effectively.

Table 2: Fixed-iteration results of FOA with different LR and FR.

2.3. Analysis of FOA

Through the analysis of (5), it can be found that NGMS has some disadvantages which limit the performance of FOA. They are listed below.

(1) FOA cannot solve optimization problems when there exist negative numbers in the domain, because S_i > 0 according to (5).

(2) When the values of X_axis and Y_axis are fixed, S_i in (5) does not follow a uniform distribution (the proof is given in Appendix B). Since S_i does not follow a uniform distribution, the candidate solutions cannot be generated uniformly in the domain; that is to say, NGMS does not allow the search to be performed uniformly in the domain, so the fruit fly swarm loses its ability to search for the global optimum. This is why FOA cannot solve complex optimization problems effectively.

(3) In (5), when the values of X_axis and Y_axis are large and the scope of the random value is small, a change in the random value has little impact on the value of S_i; therefore S_i easily falls into a local optimum.

(4) In (1)-(2), as the scopes of LR and FR increase, the probability that the absolute values of X_i and Y_i become large increases, so S_i in (5) easily falls near the zero point; this explains why FOA can solve optimization problems whose extreme point is zero.
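Disadvantage (2) is easy to verify empirically: sampling candidates from the NGMS formula shows that they cluster heavily instead of spreading evenly over their range (a quick Monte Carlo sketch; the fixed location (1, 1) and the offset range [-1, 1] are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x_axis, y_axis = 1.0, 1.0            # fixed swarm location (illustrative)
r1 = rng.uniform(-1.0, 1.0, n)       # random offsets, as in (2)
r2 = rng.uniform(-1.0, 1.0, n)
# NGMS candidate values, as in (5).
s = 1.0 / np.sqrt((x_axis + r1) ** 2 + (y_axis + r2) ** 2)

# A uniform variable would put roughly 10% of the samples in each of
# 10 equal-width bins over its range; S instead piles up in a few bins.
hist, _ = np.histogram(s, bins=10, range=(s.min(), s.max()))
frac = hist / n
print(frac)
```

The printed bin fractions are far from the flat 0.1 profile of a uniform distribution, so the candidates are not generated uniformly in the domain.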

3. LGMS-FOA

3.1. Introduction of LGMS-FOA

In order to overcome the above disadvantages, NGMS is replaced with a new linear generation mechanism of candidate solution (abbreviated as LGMS), and a LGMS-based improved FOA (abbreviated as LGMS-FOA) is proposed. The steps of LGMS-FOA are listed below.

Step 1 (parameters initialization). The main parameters of LGMS-FOA are the maximum iteration number (maxgen), the population size (sizepop), the searching coefficient, the initial weight, and the weight coefficient.

Step 2. Linear generation mechanism of candidate solution (LGMS).

Step 2.1. Initialize the fruit fly swarm location:
X_axis = rand. (10)

Step 2.2. Give the random direction and distance for food finding of an individual fruit fly:
X_i = X_axis + W × n × rand,  W = w^0 × α^g, (11)
where n is the searching coefficient, W is the inertia weight, w^0 is the initial weight, α is the weight coefficient, g is the current iteration number, and rand is a uniformly distributed random number.

Step 2.3. Let the smell concentration judgment value (S_i) equal X_i:
S_i = X_i. (12)

Remark 3. Equation (12) is called LGMS.

Step 3. Calculate the smell concentration (Smell_i) of the individual fruit fly location by inputting the smell concentration judgment value (S_i) into the smell concentration judgment function (also called the objective function):
Smell_i = Function(S_i). (13)

Step 4. Find out the fruit fly with the maximal smell concentration among the fruit fly swarm:
[bestSmell, bestIndex] = max(Smell_i). (14)

Step 5. Keep the maximal concentration value and the X coordinate. Then, the fruit fly swarm flies toward that location by using vision:
Smellbest = bestSmell,  X_axis = X(bestIndex). (15)

Step 6. Enter iterative optimization and repeat Steps 2–5. When the smell concentration is no longer superior to the previous iterative smell concentration, or the iteration number reaches the maximal iteration number, the loop stops.

The complete flowchart of LGMS-FOA is shown in Figure 3, which is listed in Appendix C.

Figure 3: The flowchart of LGMS-FOA.
3.2. Advantage of LGMS-FOA

Compared with NGMS, LGMS has the following advantages.

(1) The range of S_i in (12) can cover the whole scope of the domain.

(2) When the value of X_axis is fixed, S_i in (12) follows a uniform distribution, so LGMS allows the search to be performed uniformly in the domain; the fruit fly swarm therefore enhances its ability to search for the global optimum.

(3) In LGMS, a parameter called the inertia weight is introduced to balance global and local search. A large inertia weight facilitates a global search, while a small inertia weight facilitates a local search. By decreasing the inertia weight from a large value to a small one, LGMS-FOA has more global search ability at the beginning of the run and more local search ability near the end of the run [8].
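Putting Steps 1–6 together, LGMS-FOA can be sketched as below, assuming a minimization problem; the parameter names (n, w0, alpha) and the exact form of the decaying inertia-weight schedule are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def lgms_foa(objective, dim=2, maxgen=300, sizepop=50,
             n=1.0, w0=1.0, alpha=0.95, seed=0):
    """Sketch of LGMS-FOA: the candidate S_i = X_i is generated
    LINEARLY around the current best location, with a decaying
    inertia weight W = w0 * alpha**g (assumed schedule)."""
    rng = np.random.default_rng(seed)
    x_axis = rng.uniform(-n, n, dim)      # Step 2.1: initial location
    best_val = np.inf
    best_x = x_axis.copy()
    for g in range(maxgen):
        w = w0 * alpha ** g               # inertia weight: global -> local
        # Steps 2.2-2.3: linear candidates S_i = X_i around x_axis.
        s = x_axis + w * n * rng.uniform(-1.0, 1.0, (sizepop, dim))
        smell = np.array([objective(si) for si in s])
        idx = np.argmin(smell)            # Step 4 (minimization)
        if smell[idx] < best_val:         # Step 5: move the swarm
            best_val = smell[idx]
            best_x = s[idx].copy()
            x_axis = s[idx].copy()
    return best_x, best_val
```

Because the random term is added linearly, candidates can be negative and spread uniformly around X_axis, which removes disadvantages (1) and (2) of NGMS.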

4. Numerical Simulation

The performances of LGMS-FOA and FOA are compared first, and then the performances of LGMS-FOA and other metaheuristics are compared.

4.1. Comparison of LGMS-FOA with FOA
4.1.1. Experimental Setup

In order to compare the performances of FOA and LGMS-FOA, the same six functions in Table 1 are used, and every function is repeated 100 times.

The parameters of LGMS-FOA are maxgen = 300 and sizepop = 50, with suitable settings of the searching coefficient, the initial weight, and the weight coefficient.

The parameters of FOA are maxgen = 300 and sizepop = 50, with LR and FR set to the values that perform best in Table 2.

Remark 4. The function evaluation numbers of FOA and LGMS-FOA are the same in every iteration, because the two algorithms use the same population size and iteration number. This ensures the fairness of the comparison.

4.1.2. Experimental Results

Table 3 shows the mean and standard deviation of LGMS-FOA and FOA over 100 independent runs. Figures 4, 5, 6, 7, 8, and 9, listed in Appendix C, show the performance of FOA and LGMS-FOA when solving the six testing functions.

Table 3: Mean and standard deviation of LGMS-FOA and FOA.
Figure 4: Comparison of LGMS-FOA, FOA, PSO, and GA for .
Figure 5: Comparison of LGMS-FOA, FOA, PSO, and GA for .
Figure 6: Comparison of LGMS-FOA, FOA, PSO, and GA for .
Figure 7: Comparison of LGMS-FOA, FOA, PSO, and GA for .
Figure 8: Comparison of LGMS-FOA, FOA, PSO, and GA for .
Figure 9: Comparison of LGMS-FOA, FOA, PSO, and GA for .

From Table 3, it can be seen that the mean of LGMS-FOA is much closer to the theoretical optima, and LGMS-FOA has a better standard deviation than FOA, on five of the six testing functions. So it is concluded that LGMS-FOA is more effective and robust than FOA.

From Figures 4–9, it can be seen that the objective-value curves of LGMS-FOA descend much faster than those of FOA and that the final searching quality of LGMS-FOA is better than that of FOA. So it is also concluded that LGMS-FOA is better than FOA.

4.1.3. Robustness Analysis

Table 4 shows PS and AVIN of FOA and LGMS-FOA when solving the six functions 100 times each. From Table 4, it can be seen that LGMS-FOA can find the global optimum with a very high PS for every function. Besides, for the valid runs, LGMS-FOA requires a smaller AVIN than FOA. So it is concluded that LGMS-FOA is more effective and reliable than FOA.

Table 4: Robustness analysis.

4.2. Comparison of LGMS-FOA with Other Metaheuristics

In order to further show the effectiveness of LGMS-FOA, we carry out some comparisons with several other metaheuristics, such as the standard PSO [8] and the modified GA [2].

4.2.1. Experimental Setting

In PSO [8], the iteration number is 300 and the population size is 50; the inertia weight, the acceleration coefficients, and the maximum velocity follow the settings of [8], with the maximum velocity limited to a fraction of the domain.
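As a reference point, the standard PSO of [8] can be sketched as follows; the coefficient values here (inertia weight decreasing from 0.9 to 0.4, c1 = c2 = 2, and the velocity clamp) are common choices from that empirical study, shown as assumptions rather than this paper's exact configuration:

```python
import numpy as np

def pso(objective, dim=2, iters=300, pop=50, w=(0.9, 0.4),
        c1=2.0, c2=2.0, bounds=(-1.0, 1.0), seed=0):
    """Sketch of standard PSO with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (pop, dim))          # particle positions
    v = np.zeros((pop, dim))                     # particle velocities
    vmax = 0.5 * (hi - lo)                       # velocity clamp (assumed fraction)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()       # global best position
    for t in range(iters):
        wt = w[0] + (w[1] - w[0]) * t / (iters - 1)   # inertia: 0.9 -> 0.4
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        v = wt * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -vmax, vmax)              # limit the velocity
        x = np.clip(x + v, lo, hi)               # keep particles in the domain
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val              # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()   # update global best
    return g, pbest_val.min()
```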

In GA [2], the iteration number is 300, the population size is 50, the generation gap (GGAP) is 0.8, stochastic universal sampling is used, single-point crossover is applied with crossover probability 0.7, and discrete mutation is applied with mutation probability 0.1.

Testing functions are shown in Table 1, and every function is repeated 100 times.

Remark 5. The population sizes of GA, PSO, and LGMS-FOA are the same, so the function evaluation numbers of the three algorithms are also the same; this ensures the fairness of the comparison.

4.2.2. Experimental Results and Discussion

Table 5 shows the mean and standard deviation of GA and PSO over 100 independent runs. Table 6 shows PS of GA, PSO, and LGMS-FOA over 100 independent runs. Figures 4–9 also show the performance of PSO and GA when solving the six testing functions.

Table 5: Mean and standard deviation of GA and PSO.
Table 6: Robustness analysis.

From the comparison between Tables 3 and 5, it can be found that the mean and standard deviation of LGMS-FOA are better than those of GA on five of the six functions, and better than those of PSO on four of the six functions. So we can conclude that LGMS-FOA is more efficient than PSO and GA when the function evaluation numbers are the same.

From Figures 5, 6, 8, and 9, it can be seen that the final searching quality of LGMS-FOA is better than that of PSO and GA. From Figures 5 and 6, it can be seen that LGMS-FOA greatly improves and speeds up convergence. So, overall, the performance of LGMS-FOA is better than that of PSO and GA.

From Table 6, it can be found that LGMS-FOA finds the global optimum with a higher PS than GA and PSO. So it is concluded that LGMS-FOA is more reliable.

5. Conclusion

This paper identifies some disadvantages of FOA and proposes an improved FOA named LGMS-FOA. Simulations and comparisons of LGMS-FOA with FOA and other metaheuristics illustrate that LGMS-FOA is more effective and reliable. Future work is to apply LGMS-FOA to real engineering optimization problems.

Appendices

A. Six Famous Nonlinear Functions Used in This Paper

(1) f1: Goldstein-Price, n = 2:
f1(x1, x2) = [1 + (x1 + x2 + 1)^2 (19 − 14x1 + 3x1^2 − 14x2 + 6x1x2 + 3x2^2)] × [30 + (2x1 − 3x2)^2 (18 − 32x1 + 12x1^2 + 48x2 − 36x1x2 + 27x2^2)].

Global optimal solution: 3.

Searching domain: −2 ≤ x1 ≤ 2, −2 ≤ x2 ≤ 2.

(2) f2: Branin, n = 2:
f2(x1, x2) = (x2 − 5.1x1^2/(4π^2) + 5x1/π − 6)^2 + 10(1 − 1/(8π))cos(x1) + 10.

Global optimal solution: 0.397887.

Searching domain: −5 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 15.

(3) f3: Rastrigin, n = 2:
f3(x1, x2) = x1^2 + x2^2 − cos(18x1) − cos(18x2).

Global optimal solution: −2.

Searching domain: −1 ≤ x1 ≤ 1, −1 ≤ x2 ≤ 1.

(4) f4: Shubert, n = 2:
f4(x1, x2) = [Σ_{i=1}^{5} i cos((i + 1)x1 + i)] × [Σ_{i=1}^{5} i cos((i + 1)x2 + i)].

Global optimal solution: −186.7309.

Searching domain: −10 ≤ x1 ≤ 10, −10 ≤ x2 ≤ 10.

(5) f5: Sphere:
f5(x) = Σ_{i=1}^{n} x_i^2, with the dimension n given in Table 1.

Global optimal solution: 0.

Searching domain: −100 ≤ x_i ≤ 100.

(6) f6: Sphere (with a different dimension; see Table 1):
f6(x) = Σ_{i=1}^{n} x_i^2.

Global optimal solution: 0.

Searching domain: −100 ≤ x_i ≤ 100.
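As an illustration, two of the benchmark functions can be coded directly from their definitions; a short sketch (the vectorized form is a convenience for grid evaluation, not part of the original paper):

```python
import numpy as np

def shubert(x1, x2):
    """Shubert function f4; the global minimum is about -186.7309."""
    def term(x):
        x = np.asarray(x, dtype=float)
        # Broadcast i = 1..5 against scalar or array inputs.
        i = np.arange(1, 6).reshape((5,) + (1,) * x.ndim)
        return np.sum(i * np.cos((i + 1) * x + i), axis=0)
    return term(x1) * term(x2)

def sphere(x):
    """Sphere function f5/f6; the global minimum is 0 at the origin."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))
```

A coarse grid search over the Shubert domain already finds values close to the listed optimum of −186.7309.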

B. Proof

Proposition B.1. Suppose that
(i) S = 1/sqrt(X^2 + Y^2);
(ii) X = X_axis + R1 and Y = Y_axis + R2, where X_axis and Y_axis are constants;
(iii) R1 and R2 are independent random variables, each uniformly distributed on a bounded interval;
(iv) the domain of S is (0, 10).
Then S does not follow a uniform distribution.

Proof. For the sake of simplicity, set X_axis = 0 and Y_axis = 0 such that S = 1/sqrt(R1^2 + R2^2).
(Proof by reductio ad absurdum.) Conversely, assume that S is a random variable uniformly distributed on the interval (0, 10); then P(S ≤ s) = s/10 for every s in (0, 10).
Since R1 and R2 are independent and both uniformly distributed, their joint density is constant on a rectangle. Computing P(S ≤ s) = P(R1^2 + R2^2 ≥ 1/s^2) from this joint density yields a function of s that is not linear in s. This contradicts P(S ≤ s) = s/10. Therefore the assumption does not hold, and S does not follow a uniform distribution.
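The conclusion of Proposition B.1 can also be checked by simulation: the empirical CDF of S is nowhere near the straight-line CDF of a uniform variable (a sketch with R1 and R2 uniform on (0, 1), an illustrative choice of interval):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
r1 = rng.uniform(0.0, 1.0, n)
r2 = rng.uniform(0.0, 1.0, n)
s = 1.0 / np.sqrt(r1 ** 2 + r2 ** 2)

# Compare the empirical CDF of S with the CDF of a uniform variable
# over the same (observed) range; a uniform S would make the gap ~ 0.
lo, hi = s.min(), s.max()
grid = np.linspace(lo, hi, 1000)
ecdf = np.searchsorted(np.sort(s), grid) / n
uniform_cdf = (grid - lo) / (hi - lo)
gap = np.abs(ecdf - uniform_cdf).max()
print(gap)   # far from the ~0 a uniform S would give
```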

C. Comparison of LGMS-FOA, FOA, PSO, and GA

See Figures 3–9.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grant no. 71232004).

References

  1. Y. D. Bertrand and D. Braba, “Feature selection by a genetic algorithm application to seed discrimination by artificial vision,” Journal of the Science of Food and Agriculture, vol. 76, pp. 77–86, 1998.
  2. Y.-J. Lei, S.-W. Zhang, X.-W. Li, and C.-M. Zhou, Matlab Genetic Algorithm Toolbox and Its Application, Xidian University Publishing House, Xi'an, China, 2005.
  3. M. B. Aryanezhad and M. Hemati, “A new genetic algorithm for solving nonconvex nonlinear programming problems,” Applied Mathematics and Computation, vol. 199, no. 1, pp. 186–194, 2008.
  4. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
  5. Y. Jin and J. Branke, “Evolutionary optimization in uncertain environments—a survey,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 3, pp. 303–317, 2005.
  6. M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.
  7. Y. Fukuyama and H. Yoshida, “A particle swarm optimization for reactive power and voltage control in electric power systems,” in Proceedings of the Congress on Evolutionary Computation, pp. 87–93, May 2001.
  8. Y.-H. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of IEEE International Conference on Evolutionary Computation, pp. 1945–1950, Washington, DC, USA, 1999.
  9. J.-Y. Wu, “Solving unconstrained global optimization problems via hybrid swarm intelligence approaches,” Mathematical Problems in Engineering, vol. 2013, Article ID 256180, 15 pages, 2013.
  10. W.-T. Pan, “A new fruit fly optimization algorithm: taking the financial distress model as an example,” Knowledge-Based Systems, vol. 26, pp. 69–74, 2012.
  11. H.-Z. Li, S. Guo, C.-J. Li, and J.-Q. Sun, “A hybrid annual power load forecasting model based on generalized regression neural network with fruit fly optimization algorithm,” Knowledge-Based Systems, vol. 37, pp. 378–387, 2013.
  12. H.-L. Shieh, C.-C. Kuo, and C.-M. Chiang, “Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification,” Applied Mathematics and Computation, vol. 218, no. 8, pp. 4365–4383, 2011.
  13. B. Liu, L. Wang, Y.-H. Jin, F. Tang, and D.-X. Huang, “Improved particle swarm optimization combined with chaos,” Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.