Mathematical Problems in Engineering

Special Issue: Intelligent Techniques for Simulation and Modelling

Research Article | Open Access

Volume 2013 | Article ID 308250 | 7 pages | https://doi.org/10.1155/2013/308250

Opposition-Based Animal Migration Optimization

Academic Editor: William Guo
Received: 25 Jul 2013
Accepted: 08 Sep 2013
Published: 29 Oct 2013

Abstract

Animal migration optimization (AMO) is a simple and efficient optimization algorithm inspired by animal migration behavior. However, like most optimization algorithms, it suffers from premature convergence and often falls into local optima. This paper presents an opposition-based AMO algorithm. It employs opposition-based learning during population initialization and evolution to enlarge the search space, accelerate the convergence rate, and improve search ability. A set of well-known benchmark functions is employed for experimental verification, and the results clearly show that opposition-based learning can improve the performance of AMO.

1. Introduction

Many real-world problems can be cast as optimization problems, which play an important role in both industrial applications and scientific research. Over the past decades, many optimization algorithms have been proposed. Among them, the genetic algorithm may be the earliest and most popular, inspired by natural genetic variation and natural selection [1, 2]. Inspired by the social behavior of bird flocking and fish schooling, particle swarm optimization was developed by Kennedy and Eberhart in 1995 [3, 4]. The artificial bee colony algorithm, which simulates the foraging behavior of a bee swarm, was proposed by Karaboga and Basturk [5, 6]. Ant colony optimization (ACO), which simulates the behavior of ants, was first introduced by Dorigo [7, 8]. Animal migration optimization (AMO), a new optimization algorithm inspired by animal migration behavior, was first proposed by Li et al. [9]. AMO simulates the widespread migration phenomenon in the animal kingdom: individuals change positions and are replaced, and the optimal solution is found gradually. AMO has obtained good experimental results on many optimization problems.

Optimization algorithms usually begin from an initial set of randomly generated variables and proceed by iteration toward the global optimum of the objective function. It is known that the performance of such algorithms is highly related to the diversity of the particles; when diversity is lost, an algorithm may easily fall into local optima and suffer a slow convergence rate and poor accuracy in the later stages of evolution. In recent years, much effort has been devoted to improving the performance of different algorithms.

The concept of opposition-based learning (OBL) was first introduced by Tizhoosh [10]. The main idea is to consider a current candidate solution together with its opposite candidate in order to enlarge the search scope, and to use an elite selection mechanism to speed up convergence toward the optimal solution. It has been proved in [11] that an opposite candidate solution has a better chance of being close to the global optimum than a random candidate solution. The opposition-based learning idea has been successfully applied to GA [11], PSO [12–16], DE [17, 18], artificial neural networks (ANN) [19, 20], and ant colony optimization (ACO) [21]; experimental results show that opposition-based learning can improve the search capability of these algorithms to some extent.

This paper presents an algorithm to improve the performance of AMO. In the absence of a priori information about the population, we employ opposition-based learning during both population initialization and population evolution. The opposition-based learning mechanism transforms solutions from the current search space into a new one, enlarging the search space. By selecting the better of each current solution and its opposite, the algorithm improves its search ability, accelerates the convergence rate, and has a better chance of finding the global optimum.

The rest of this paper is organized as follows. Section 2 briefly introduces the animal migration optimization algorithm. Section 3 gives a short description of opposition-based learning. Section 4 explains an implementation of the proposed opposition-based AMO algorithm. Section 5 presents a comparative study among AMO, OP-AMO, and other optimization algorithms on 23 benchmark problems. Finally, the work is concluded in Section 6.

2. Animal Migration Optimization

AMO is a new heuristic optimization algorithm inspired by the behavior of animal migration which is a ubiquitous phenomenon that can be found in all major animal groups, such as birds, mammals, fish, reptiles, amphibians, insects, and crustaceans [9].

In this algorithm, there are mainly two processes. In the first process, the algorithm simulates how a group of animals moves from the current position to a new position. During this process, each individual should obey three main rules: move in the same direction as its neighbors; remain close to its neighbors; avoid collisions with its neighbors. We select one neighbor randomly and update the position of the individual according to this neighbor, as in the following formula:

X_{i,G+1} = X_{i,G} + δ · (X_{neighborhood,G} − X_{i,G}),

where X_{neighborhood,G} is the current position of the neighbor, δ is produced by a random number generator controlled by a Gaussian distribution, X_{i,G} is the current position of the ith individual, and X_{i,G+1} is the new position of the ith individual.
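As a concrete sketch of this move (Python with NumPy; the function and variable names are illustrative, and picking a single uniformly random neighbor is a simplification of the paper's neighborhood topology):

```python
import numpy as np

def migration_move(pop, i, rng):
    """One AMO-style migration step for individual i: pick one neighbor
    at random and move toward it with a Gaussian-distributed factor delta."""
    n, d = pop.shape
    neighbor = rng.choice([k for k in range(n) if k != i])
    delta = rng.normal(0.0, 1.0, size=d)          # Gaussian random factor
    return pop[i] + delta * (pop[neighbor] - pop[i])

rng = np.random.default_rng(42)
pop = rng.uniform(-100.0, 100.0, size=(5, 3))     # 5 animals, 3 dimensions
new_pos = migration_move(pop, 0, rng)
```

Each dimension moves by its own Gaussian fraction of the distance to the chosen neighbor, so the step is large when the population is spread out and shrinks as it converges.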

In the second process, the algorithm simulates how new animals are introduced into the group during the migration: some animals leave the group and some new animals join the population. We assume that the total number of animals is fixed. An animal is replaced by a new individual with a probability Pa, which depends on the quality of its fitness: for the individual with the best fitness, Pa is 1/NP (where NP is the population size), and for the worst, Pa is 1. This process is shown in Algorithm 1, where r1 and r2 are randomly chosen integers with r1 ≠ r2 ≠ i. After a new solution X_{i,G+1} is produced, it is evaluated and compared with X_{i,G}; if the objective fitness of X_{i,G+1} is smaller than that of X_{i,G}, then X_{i,G+1} is accepted as the new base solution; otherwise, X_{i,G} is kept.

Algorithm 1: Population updating process.

For i = 1 to NP
  For j = 1 to D
    If rand > Pa
      X_{i,j,G+1} = X_{r1,j,G} + rand · (X_{best,j,G} − X_{i,j,G}) + rand · (X_{r2,j,G} − X_{i,j,G})
    End if
  End for
End for
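A runnable sketch of this replacement phase (Python with NumPy; names are illustrative). Pa grows linearly with the fitness rank, from 1/NP for the best animal to 1 for the worst; following the prose above, which says an animal is replaced with probability Pa, the sketch tests rand < Pa, and the replacement formula combining X_r1, X_best, and X_r2 is taken as an assumption from the AMO update rule:

```python
import numpy as np

def replacement_phase(pop, fitness, rng):
    """AMO-style population updating: the animal ranked r by fitness
    (best r = 1) has each dimension replaced with probability Pa = r / NP."""
    n, d = pop.shape
    order = np.argsort(fitness)                 # indices, best first
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(1, n + 1)           # best -> 1, worst -> n
    best = pop[order[0]]
    out = pop.copy()
    for i in range(n):
        pa = rank[i] / n                        # 1/NP ... 1
        r1, r2 = rng.choice([k for k in range(n) if k != i],
                            size=2, replace=False)
        for j in range(d):
            if rng.random() < pa:               # worse animals change more often
                out[i, j] = (pop[r1, j]
                             + rng.random() * (best[j] - pop[i, j])
                             + rng.random() * (pop[r2, j] - pop[i, j]))
    return out

rng = np.random.default_rng(1)
pop = rng.uniform(-10.0, 10.0, size=(6, 4))
new_pop = replacement_phase(pop, np.sum(pop**2, axis=1), rng)
```

The rank-based probability means good animals are mostly preserved while poor ones are almost entirely rebuilt from the best individual and two random peers.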

To verify the performance of AMO, 23 benchmark functions chosen from the literature were employed in [9]. The results showed that AMO clearly outperforms several evolutionary algorithms from the literature.

3. Opposition-Based Learning

An evolutionary algorithm typically starts from a randomly generated initial population and iterates until a satisfactory solution is found. If no prior information about the population is available, the speed of evolution is related to the distance between the initial particles and the best particle: if some initial particles are selected close to the best individual, the convergence of the algorithm can be accelerated to some extent.

The opposition-based learning algorithm originated in reinforcement learning and has been proven to be an effective concept for enhancing various optimization approaches [22]. It generates an opposite population via the opposition learning mechanism and uses elite selection to choose the individuals closer to the best individual as members of the initial population, thus improving the overall convergence speed.

For ease of description, we give the definition of the opposition point first.

Definition 1. Let x ∈ [a, b] be a real number; the opposite point x̄ is defined as

x̄ = a + b − x.

Similarly, the definition can be extended to a high-dimensional space.

Definition 2. Let P = (x_1, x_2, …, x_D) be a point in D-dimensional space, where x_i ∈ [a_i, b_i] for all i ∈ {1, 2, …, D}; the point P̄ = (x̄_1, x̄_2, …, x̄_D) is defined as the opposite of P, in which

x̄_i = a_i + b_i − x_i.

Based on the two previous definitions, opposition-based optimization can be described as follows.

Definition 3. Let P be a candidate solution in D-dimensional space and let f(·) be the fitness function used to evaluate candidates. According to the definition of the opposite point, P̄ is the opposite of P. If f(P̄) is better than f(P), then P is replaced by P̄; otherwise, P is kept. In this way, the individual and its opposite are evaluated simultaneously, and the better one is retained.
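The three definitions translate directly into code (a minimal Python sketch, assuming minimization so that "better" means a lower fitness value):

```python
def opposite(x, a, b):
    """Definition 1: opposite of a real number x in [a, b]."""
    return a + b - x

def opposite_point(p, bounds):
    """Definition 2: componentwise opposite of a D-dimensional point,
    given per-dimension bounds [(a_1, b_1), ..., (a_D, b_D)]."""
    return [a + b - x for x, (a, b) in zip(p, bounds)]

def better_of(p, bounds, f):
    """Definition 3: evaluate p and its opposite, keep the fitter one
    (minimization assumed)."""
    q = opposite_point(p, bounds)
    return q if f(q) < f(p) else p
```

For example, with bounds (0, 4) and (−5, 5), the opposite of (3, 3) is (1, −3); under a sum-of-squares fitness the opposite is fitter, so better_of returns it.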

4. Enhanced AMO Using Opposition-Based Learning

4.1. Basic Concept

According to probability theory, each randomly generated candidate solution has a fifty percent chance of being farther from the optimal solution than its opposite. Because of the lack of prior information, the AMO algorithm generates candidate solutions randomly in the search space as the initial population. During the optimization process, the time needed to find the optimal solution depends on the distance between the candidate solutions and the optimum. By applying opposition-based learning, we evaluate not only the current candidate x but also its opposite candidate x̄; this gives a better chance of finding a solution closer to the global optimum.

Let x be a solution in the current search space, x ∈ [a, b]; the new solution in the opposite space is

x* = k(a + b) − x,

where k is a real number in [0, 1] that is often generated randomly; it denotes the probability of reverse learning.

From the above definition, we can infer that x* ∈ [k(a + b) − b, k(a + b) − a]. Obviously, through the opposite transformation, the center of the search space is shifted from (a + b)/2 to k(a + b) − (a + b)/2.
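A small numeric check of this transformation (Python; the names are illustrative, and the generalized opposite x* = k(a + b) − x is taken as an assumption consistent with the definitions above). With k = 1 the plain opposite a + b − x is recovered, and for x ∈ [a, b] the image x* lies in [k(a + b) − b, k(a + b) − a], whose centre is k(a + b) − (a + b)/2:

```python
def generalized_opposite(x, a, b, k):
    """Generalized opposite point x* = k*(a + b) - x."""
    return k * (a + b) - x

a, b, k = -2.0, 6.0, 0.25
lo = generalized_opposite(b, a, b, k)   # image of the right endpoint b
hi = generalized_opposite(a, a, b, k)   # image of the left endpoint a
centre = (lo + hi) / 2.0                # = k*(a + b) - (a + b)/2
```

Here the original interval [−2, 6] (centre 2) maps to [−5, 3] (centre −1), showing how k shifts the centre of the opposite search space.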

4.2. Opposition-Based AMO Algorithm

In this paper, we introduce a novel algorithm that combines opposition-based learning with AMO. Through the opposition learning mechanism, we consider candidate solutions in both the current search space and its opposite space simultaneously. By selecting the better individuals from the two search spaces, the algorithm has a better chance of finding the global optimum and converges considerably faster.

During population initialization and evolution, the opposition learning mechanism lets us consider more candidate solutions, from which we choose the most promising ones for further evolution.

The opposition-based population initialization proceeds as follows.

(1) Initialize the population in the search space randomly.
(2) Calculate the opposite population from the initial population; each dimension is calculated as

OP_{i,j} = k(a_j + b_j) − P_{i,j},

where P_{i,j} and OP_{i,j} denote the jth variable of the ith vector of the population and the opposite population, respectively, and k is a real number in [0, 1] which denotes the probability of reverse learning.
(3) Choose the NP fittest individuals from the union of the random population and the opposite population, according to fitness, as the initial population.

During the evolution process, we again adopt the opposition learning method to increase the chance of finding the optimal solution. When a new individual is generated or joins the population, its opposite is also considered; if the fitness of the opposite solution is better than that of the new individual, the opposite solution is adopted; otherwise, the new individual is kept.

However, the opposition learning method may not suit every optimization problem. For instance, the computed opposite candidate may jump out of the solution space, making it invalid. To avoid this, such a transformed candidate is reset to a random value:

OP_{i,j} = rand(a_j, b_j), if OP_{i,j} < a_j or OP_{i,j} > b_j,

where rand(a_j, b_j) is a random number between a_j and b_j.
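Putting the three initialization steps and the random repair rule together (Python with NumPy; function names are illustrative, and drawing one k per individual is an implementation choice):

```python
import numpy as np

def opposition_init(np_, bounds, f, rng):
    """Opposition-based initialization: random population, generalized
    opposite population OP[i, j] = k*(a_j + b_j) - P[i, j], random repair
    of out-of-range components, then keep the NP fittest of the union."""
    lo, hi = (np.array(v, dtype=float) for v in zip(*bounds))
    pop = rng.uniform(lo, hi, size=(np_, len(bounds)))
    k = rng.random(size=(np_, 1))                 # reverse-learning factor
    opp = k * (lo + hi) - pop
    bad = (opp < lo) | (opp > hi)                 # jumped out of the space
    repair = rng.uniform(lo, hi, size=opp.shape)  # rand(a_j, b_j)
    opp = np.where(bad, repair, opp)
    union = np.vstack([pop, opp])
    fit = np.array([f(x) for x in union])
    return union[np.argsort(fit)[:np_]]           # elite selection

rng = np.random.default_rng(7)
init = opposition_init(10, [(-5.12, 5.12)] * 3,
                       lambda x: float(np.sum(x * x)), rng)
```

The cost is one extra fitness evaluation per initial individual, in exchange for a starting population that is already biased toward the better half of 2·NP samples.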

The opposition learning based AMO algorithm is described in Algorithm 2.

(1)  Begin
(2)  Set the generation counter G = 1, and randomly initialize a population P of NP animals X_i.
(3)  Evaluate the fitness of each individual X_i.
(4)  For i = 1 to NP do
(5)    For j = 1 to D do
(6)      OP_{i,j} = k(a_j + b_j) − X_{i,j}
(7)    End for
(8)    Calculate the fitness value of OP_i
(9)  End for
(10) Select the NP fittest individuals from the union of P and OP as the initial population
(11) While stopping criteria are not satisfied do
(12)   For i = 1 to NP do
(13)     For j = 1 to D do
(14)       X_{i,j,G+1} = X_{i,j,G} + δ · (X_{neighborhood,j,G} − X_{i,j,G})
(15)     End for
(16)     If rand < Pa then
(17)       For j = 1 to D do
(18)         OX_{i,j,G+1} = k(a_j + b_j) − X_{i,j,G+1}
(19)       End for
(20)     End if
(21)   End for
(22)   Select the NP fittest particles from the current solutions and their opposites as the current population
(23)   For i = 1 to NP
(24)     For j = 1 to D
(25)       Select r1 ≠ r2 ≠ i randomly
(26)       If rand > Pa then
(27)         X_{i,j,G+1} = X_{r1,j,G} + rand · (X_{best,j,G} − X_{i,j,G}) + rand · (X_{r2,j,G} − X_{i,j,G})
(28)       End if
(29)     End for
(30)   End for
(31)   For i = 1 to NP do
(32)     Evaluate the offspring X_{i,G+1}
(33)     If X_{i,G+1} is better than X_i then
(34)       X_i = X_{i,G+1}
(35)     End if
(36)   End for
(37)   Memorize the best solution achieved so far
(38) End while
(39) End
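The whole procedure can be condensed into a short, runnable sketch (Python with NumPy; all names are illustrative). For brevity this sketch keeps the opposition-based initialization, the migration phase, an opposition step on offspring, and greedy survivor selection, but folds the boundary repair into a simple clip and omits the rank-based replacement phase, so it is an approximation of Algorithm 2 rather than a faithful implementation:

```python
import numpy as np

def op_amo(f, bounds, np_=20, iters=150, seed=0):
    """Condensed opposition-based AMO for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.array(v, dtype=float) for v in zip(*bounds))
    d = len(bounds)
    # Opposition-based initialization: keep the NP fittest of P and OP.
    pop = rng.uniform(lo, hi, size=(np_, d))
    opp = np.clip(rng.random((np_, 1)) * (lo + hi) - pop, lo, hi)
    union = np.vstack([pop, opp])
    fit = np.array([f(x) for x in union])
    keep = np.argsort(fit)[:np_]
    pop, fit = union[keep], fit[keep]
    for _ in range(iters):
        for i in range(np_):
            j = int(rng.integers(np_ - 1))
            j += j >= i                          # random neighbor != i
            cand = np.clip(pop[i] + rng.normal(size=d) * (pop[j] - pop[i]),
                           lo, hi)
            # Opposition step on the offspring: keep the fitter of the
            # candidate and its generalized opposite.
            copp = np.clip(rng.random() * (lo + hi) - cand, lo, hi)
            if f(copp) < f(cand):
                cand = copp
            fc = f(cand)
            if fc < fit[i]:                      # greedy survivor selection
                pop[i], fit[i] = cand, fc
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

x_best, f_best = op_amo(lambda x: float(np.sum(x * x)),
                        [(-5.12, 5.12)] * 5)
```

On a 5-dimensional sphere function the sketch steadily contracts the population toward the origin, which is enough to see the effect of the opposition and greedy-selection steps.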

5. Experimental Results

To evaluate the performance of our algorithm, we applied it to the 23 standard benchmark functions shown in Table 1. These functions have been widely used in the literature.


Table 1: The 23 benchmark test functions (D is the problem dimension; the analytic definitions, rendered as images in the original, are not reproduced here, so the functions are labeled f1–f23).

Function   D    Range               Optimum
f1         30   [−100, 100]         0
f2         30   [−10, 10]           0
f3         30   [−100, 100]         0
f4         30   [−100, 100]         0
f5         30   [−30, 30]           0
f6         30   [−100, 100]         0
f7         30   [−1.28, 1.28]      0
f8         30   [−500, 500]         −418.9829 · n
f9         30   [−5.12, 5.12]      0
f10        30   [−32, 32]           0
f11        30   [−600, 600]         0
f12        30   [−50, 50]           0
f13        30   [−50, 50]           0
f14        2    [−65.53, 65.53]    0.998004
f15        4    [−5, 5]             0.0003075
f16        2    [−5, 5]             −1.0316285
f17        2    [−5, 10] × [0, 15]  0.398
f18        2    [−5, 5]             3
f19        3    [0, 1]              −3.86
f20        6    [0, 1]              −3.32
f21        4    [0, 10]             −10.1532
f22        4    [0, 10]             −10.4029
f23        4    [0, 10]             −10.5364

The maximum number of generations is 1500 for five of the test functions, 2000 for two, 3000 for three, 5000 for two, 400 for one, 100 for eight, 30 for one, and 200 for one. The population size is 50 because this algorithm has two phases. The results on the 23 test problems are presented in Tables 2, 3, and 4.


Table 2: Mean results and ranks of the seven algorithms on f1–f7 ("—": value not recoverable from the original).

Function        PSO       DE        FA        GSA       ABC       AMO       OP-AMO
f1   Mean       —         —         —         —         —         —         —
     Rank       6         5         7         4         3         2         1
f2   Mean       —         —         —         —         —         —         —
     Rank       4         5         7         6         3         2         1
f3   Mean       2.9847    —         —         —         —         —         —
     Rank       6         2         4         5         7         3         1
f4   Mean       7.9997    0.2216    0.0554    —         18.5227   —         0.6132
     Rank       6         4         3         1         7         2         5
f5   Mean       46.9202   0.2657    38.1248   20.0819   0.0441    4.1817    —
     Rank       7         3         6         5         2         4         1
f6   Mean       —         —         —         —         —         0         0
     Rank       6         5         7         4         3         1         1
f7   Mean       0.0135    0.0042    0.0082    0.0039    0.0324    0.0017    0.0004
     Rank       6         4         5         3         7         2         1
Average rank    5.86      4         5.57      4         4.71      2.28      1.57
Overall rank    7         3         6         3         5         2         1


Table 3: Mean results and ranks of the seven algorithms on f8–f13 ("—": value not recoverable from the original).

Function        PSO       DE         FA        GSA       ABC       AMO       OP-AMO
f8   Mean       —         —          —         —         —         —         —
     Rank       5         4          6         7         3         2         1
f9   Mean       18.2675   134.6789   23.5213   7.2831    0         0         0
     Rank       5         7          6         4         1         1         1
f10  Mean       —         —          —         —         —         —         1.7183
     Rank       5         4          6         3         2         1         7
f11  Mean       0.0168    0          0.0025    0.01265   0         0         1
     Rank       6         1          4         5         1         1         7
f12  Mean       —         —          —         —         —         —         —
     Rank       7         5          6         4         3         2         1
f13  Mean       —         —          —         —         —         —         —
     Rank       6         5          7         1         4         3         2
Average rank    5.67      4.5        5.83      4         2.33      1.67      3.17
Overall rank    6         4          5         3         2         1         3


Table 4: Mean results and ranks of the seven algorithms on f14–f23 ("—": value not recoverable from the original).

Function        PSO        DE         FA         GSA        ABC        AMO        OP-AMO
f14  Mean       0.9980     0.9980     3.0273     5.9533     0.9980     0.9980     0.9980
     Rank       3          2          6          7          1          4          4
f15  Mean       —          —          —          —          —          —          —
     Rank       4          3          6          7          5          2          1
f16  Mean       −1.0316    −1.0316    −1.0314    −1.03163   −1.0316    −1.0316    −1.0316
     Rank       4          3          7          1          2          5          5
f17  Mean       0.3979     0.3979     0.3979     0.3979     0.3979     0.3979     0.3979
     Rank       5          1          6          1          7          1          1
f18  Mean       3.0001     3.0000     3.0123     3.7403     3.0000     3.0018     2.9999
     Rank       4          2          5          7          2          6          1
f19  Mean       −3.8628    −3.8628    −3.8613    −3.8625    −3.8628    −3.8628    −3.8629
     Rank       4          3          8          7          4          2          1
f20  Mean       −3.2554    −3.2174    −3.2741    −3.3220    −3.3220    −3.3220    −3.3223
     Rank       6          7          5          1          2          3          4
f21  Mean       −7.6393    −10.1532   −6.5633    −4.784     −10.1528   −10.0592   −10.1532
     Rank       5          1          6          7          3          4          1
f22  Mean       −7.3602    −10.4029   −10.4027   −6.5797    −10.4012   −10.3899   −10.4029
     Rank       6          1          3          7          4          6          1
f23  Mean       −8.9611    −10.5364   −10.2297   −8.2651    −10.5339   −10.4990   −10.5364
     Rank       5          1          6          7          3          4          1
Average rank    4.6        2.4        5.8        5.2        3.3        3.7        2
Overall rank    5          2          7          6          3          4          1

As seen from Tables 2–4, the opposition-based AMO algorithm significantly improves the results on six of the benchmark functions, achieves the same performance as AMO on most of the other functions, and is better than PSO, DE, ABC, and FA overall. This indicates that the algorithm can accelerate the convergence rate and find better solutions for some optimization problems without leading to premature convergence.

Figure 1 shows the evolutionary process of one of the test functions under AMO and opposition-based AMO. The horizontal axis denotes the number of iterations, and the vertical axis the optimal value of the objective function. The graph shows that the convergence of opposition-based AMO is clearly faster and that its optimization accuracy is also higher.

6. Conclusions

A novel opposition-based AMO algorithm is proposed in this paper. The approach provides a better chance of finding good solutions by transforming candidate solutions from the current search space into a new one. Experimental results show that, compared with the original AMO, the proposed algorithm is efficient on most of the test functions. However, the results also show that the algorithm is not suitable for all kinds of problems; on some optimization problems it brings no significant improvement. How to adapt the algorithm to a wider range of optimization problems is worthy of further study.

Conflict of Interests

The authors declare that they do not have any commercial or associative interest that represents a conflict of interests in connection with the work submitted.

References

  1. M. Melanie, An Introduction to Genetic Algorithms, MIT Press, Cambridge, Mass, USA, 1999.
  2. S. N. Sivanandam and S. N. Deepa, Introduction to Genetic Algorithms, Springer, Berlin, 2008.
  3. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
  4. A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, Hoboken, NJ, USA, 2005.
  5. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  6. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 687–697, 2008.
  7. M. Dorigo, Optimization, learning and natural algorithms [Ph.D. dissertation], Politecnico di Milano, Milano, Italy, 1992.
  8. M.-H. Lin, J.-F. Tsai, and L.-Y. Lee, “Ant colony optimization for social utility maximization in a multiuser communication system,” Mathematical Problems in Engineering, vol. 2013, Article ID 798631, 8 pages, 2013.
  9. X. Li, J. Zhang, and M. Yin, “Animal migration optimization: an optimization algorithm inspired by animal migration behavior,” Neural Computing and Applications, 2013.
  10. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA '05), pp. 695–701, November 2005.
  11. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition versus randomness in soft computing techniques,” Applied Soft Computing Journal, vol. 8, no. 2, pp. 906–918, 2008.
  12. Z. Lin and L. Wang, “A new opposition-based compact genetic algorithm with fluctuation,” Journal of Computational Information Systems, vol. 6, no. 3, pp. 897–904, 2010.
  13. H. Lin and H. Xingshi, “A novel opposition-based particle swarm optimization for noisy problems,” in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 624–629, Haikou, China, August 2007.
  14. H. Wang, H. Li, Y. Liu, C. Li, and S. Zeng, “Opposition-based particle swarm algorithm with Cauchy mutation,” in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC '07), pp. 4750–4756, Singapore, September 2007.
  15. H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, and M. Ventresca, “Enhancing particle swarm optimization using generalized opposition-based learning,” Information Sciences, vol. 181, no. 20, pp. 4699–4714, 2011.
  16. H. Wang, Z. Wu, S. Rahnamayan, and J. Wang, “Diversity analysis of opposition-based differential evolution: an experimental study,” in Proceedings of the International Symposium on Intelligence Computation and Applications, pp. 95–102, 2010.
  17. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution algorithms,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 2010–2017, Vancouver, Canada, July 2006.
  18. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
  19. M. Ventresca and H. R. Tizhoosh, “Improving the convergence of backpropagation by opposite transfer functions,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '06), pp. 4777–4784, Vancouver, Canada, July 2006.
  20. M. Ventresca and H. R. Tizhoosh, “Opposite transfer functions and backpropagation through time,” in Proceedings of the IEEE Symposium on Foundations of Computational Intelligence (FOCI '07), pp. 570–577, Honolulu, Hawaii, USA, April 2007.
  21. A. R. Malisia and H. R. Tizhoosh, “Applying opposition-based ideas to the Ant Colony System,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '07), pp. 182–189, Honolulu, Hawaii, USA, April 2007.
  22. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 11, pp. 4316–4327, 2012.

Copyright © 2013 Yi Cao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
