Abstract

AMO is a simple and efficient optimization algorithm inspired by animal migration behavior. However, like most optimization algorithms, it suffers from premature convergence and often falls into local optima. This paper presents an opposition-based AMO algorithm. It employs opposition-based learning during population initialization and evolution to enlarge the search space, accelerate the convergence rate, and improve the search ability. A set of well-known benchmark functions is employed for experimental verification, and the results clearly show that opposition-based learning can improve the performance of AMO.

1. Introduction

Many real-world problems can be cast as optimization problems. Optimization problems play an important role in both industrial application and scientific research. In the past decades, many different optimization algorithms have been proposed. Among them, the genetic algorithm (GA), inspired by natural genetic variation and natural selection, is perhaps the earliest and most popular [1, 2]. Inspired by the social behavior of bird flocking and fish schooling, particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [3, 4]. The artificial bee colony (ABC) algorithm, which simulates the foraging behavior of a bee swarm, was proposed by Karaboga and Basturk in 2005 [5, 6]. Ant colony optimization (ACO), which simulates the behavior of ants, was first introduced by Dorigo [7, 8]. Animal Migration Optimization (AMO), a new optimization algorithm inspired by animal migration behavior, was first proposed by Li et al. [9]. AMO simulates the widespread migration phenomenon in the animal kingdom: through changes of position and replacement of individuals, it gradually approaches the optimal solution. AMO has obtained good experimental results on many optimization problems.

Optimization algorithms typically start from an initial set of randomly generated candidate solutions and iterate toward the global optimum (or maximum) of the objective function. It is known that the performance of these algorithms is highly related to the diversity of the population; with insufficient diversity an algorithm may easily fall into local optima and suffer from a slow convergence rate and poor accuracy in the later stage of evolution. In recent years, many efforts have been made to improve the performance of different algorithms.

The concept of opposition-based learning (OBL) was first introduced by Tizhoosh [10]. The main idea is to consider the current candidate solution together with its opposite candidate in order to enlarge the search scope, and to use an elite selection mechanism to speed up convergence toward the optimal solution. It has been proved in [11] that an opposite candidate solution has a higher chance of being closer to the global optimum than a second random candidate solution. The opposition-based learning idea has been successfully applied to GA [11], PSO [12–16], DE [17, 18], artificial neural networks (ANN) [19, 20], and ant colony optimization (ACO) [21]; experimental results show that opposition-based learning can improve the search capability of these algorithms to some extent.

This paper presents an algorithm to improve the performance of AMO. In the absence of a priori information about the population, we employ opposition-based learning during both population initialization and population evolution. The opposition-based learning mechanism transforms solutions from the current search space into a new search space and thereby enlarges the region explored. By selecting the better of the current solution and its opposite, the algorithm improves its search ability, accelerates the convergence rate, and has a better chance of finding the global optimum.

The rest of this paper is organized as follows. Section 2 briefly introduces the animal migration optimization algorithm. Section 3 gives a simple description of opposition-based learning. Section 4 explains an implementation of the proposed algorithm, opposition-based AMO algorithm. Section 5 presents a comparative study among AMO, OPAMO, and other optimization algorithms on 23 benchmark problems. Finally, the work is concluded in Section 6.

2. Animal Migration Optimization

AMO is a new heuristic optimization algorithm inspired by the behavior of animal migration which is a ubiquitous phenomenon that can be found in all major animal groups, such as birds, mammals, fish, reptiles, amphibians, insects, and crustaceans [9].

In this algorithm, there are mainly two processes. In the first process the algorithm simulates how the groups of animals move from the current position to a new position. During this process, each individual should obey three main rules: move in the same direction as its neighbors; remain close to its neighbors; avoid collisions with its neighbors. We select one neighbor randomly and update the position of the individual according to this neighbor, as can be seen in the following formula:
\[ X_{i,G+1} = X_{i,G} + \delta \cdot \left( X_{\mathrm{neighborhood},G} - X_{i,G} \right), \]
where \(X_{\mathrm{neighborhood},G}\) is the current position of the neighbor, \(\delta\) is produced using a random number generator controlled by a Gaussian distribution, \(X_{i,G}\) is the current position of the \(i\)th individual, and \(X_{i,G+1}\) is the new position of the \(i\)th individual.
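For illustration only, the following NumPy sketch implements this migration step under our own assumptions (a ring topology with two neighbors on each side and a standard-normal delta); the function and variable names are not taken from [9].

import numpy as np

def migration_step(X, rng, radius=2):
    """One migration move: each individual follows one randomly chosen
    neighbor on a ring, scaled by a Gaussian random factor delta."""
    NP, D = X.shape
    X_new = np.empty_like(X)
    offsets = [o for o in range(-radius, radius + 1) if o != 0]
    for i in range(NP):
        nb = (i + rng.choice(offsets)) % NP        # random neighbor on the ring
        delta = rng.normal()                       # Gaussian random number
        X_new[i] = X[i] + delta * (X[nb] - X[i])   # X_{i,G+1} = X_{i,G} + delta*(X_nb - X_i)
    return X_new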

In the second process, the algorithm simulates how new animals are introduced to the group during the migration. During this population updating process, some animals will leave the group and some new animals will join the population. We assume that the number of animals is fixed, so an animal is replaced by a new individual with a probability Pa. This probability is assigned according to the quality of the fitness: for the individual with the best fitness, Pa is 1/NP, and for the worst it is 1. The process is shown in Algorithm 1, where \(r_1, r_2 \in \{1, 2, \ldots, NP\}\) are randomly chosen integers and \(r_1 \neq r_2 \neq i\). After a new solution \(X_{i,G+1}\) is produced, it is evaluated and compared with \(X_{i,G}\). If the objective fitness of \(X_{i,G+1}\) is smaller than the fitness of \(X_{i,G}\), then \(X_{i,G+1}\) is accepted as a new basic solution; otherwise, \(X_{i,G}\) is retained.

For i = 1 to NP do
 For j = 1 to D do
  If rand > Pa then
   X_{i,j,G+1} = X_{r1,j,G} + rand * (X_{best,j,G} - X_{i,j,G}) + rand * (X_{r2,j,G} - X_{i,j,G})
  End if
 End for
End for

Algorithm 1: Population updating process.
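A hedged NumPy sketch of this replacement step is given below; the rank-based mapping of Pa (1/NP for the best individual, 1 for the worst), the condition rand > Pa, and the minimization convention mirror the description above, while the helper names are ours.

import numpy as np

def population_update(X, fitness, rng):
    """Sketch of Algorithm 1: probabilistically rebuild dimensions of each
    individual from two random peers and the best individual (minimization)."""
    NP, D = X.shape
    order = np.argsort(fitness)                  # best individual first
    rank = np.empty(NP, dtype=int)
    rank[order] = np.arange(1, NP + 1)
    Pa = rank / NP                               # best -> 1/NP, worst -> 1
    best = X[order[0]]
    X_new = X.copy()
    for i in range(NP):
        r1, r2 = rng.choice([k for k in range(NP) if k != i], size=2, replace=False)
        for j in range(D):
            if rng.random() > Pa[i]:             # condition as stated in the paper
                X_new[i, j] = (X[r1, j]
                               + rng.random() * (best[j] - X[i, j])
                               + rng.random() * (X[r2, j] - X[i, j]))
    return X_new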

To verify the performance of AMO, 23 benchmark functions chosen from the literature were employed in [9]. The results show that AMO clearly outperforms several evolutionary algorithms from the literature.

3. Opposition-Based Learning

Evolutionary algorithms usually start from a random initial population and iterate until a satisfactory solution is found. If no prior information about the population is known, the speed of evolution is related to the distance between the initial individuals and the best individual. If some initial individuals close to the best individual can be selected, the convergence of the algorithm can be accelerated to some extent.

Opposition-based learning, which was first studied in the context of reinforcement learning, has been proven to be an effective concept for enhancing various optimization approaches [22]. It uses an opposition learning mechanism to generate an opposite population and employs elite selection to choose the individuals closer to the best individual as members of the initial population, thus improving the overall convergence speed of the evolution.

For ease of description, we give the definition of the opposition point first.

Definition 1. Let \(x \in [a, b]\) be a real number; its opposite point \(\tilde{x}\) is defined as
\[ \tilde{x} = a + b - x. \]
Similarly, the definition can be extended to a high-dimensional space.

Definition 2. Let \(P = (x_1, x_2, \ldots, x_D)\) be a point in \(D\)-dimensional space, where \(x_j \in [a_j, b_j]\) for all \(j \in \{1, 2, \ldots, D\}\); the point \(\tilde{P} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D)\) is defined as the opposite of \(P\), in which
\[ \tilde{x}_j = a_j + b_j - x_j. \]
Based on the two previous definitions, opposition-based optimization can be described as follows.

Definition 3. Let \(P = (x_1, x_2, \ldots, x_D)\) be a candidate solution in \(D\)-dimensional space and let \(f(\cdot)\) be the fitness function used to evaluate candidates. According to the definition of the opposite point, \(\tilde{P} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D)\) is the opposite of \(P\). If \(f(\tilde{P})\) is better than \(f(P)\), then \(\tilde{P}\) replaces \(P\) as the candidate; otherwise, \(P\) is retained. In this way, the candidate and its opposite are evaluated simultaneously and the better individual is kept.
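These three definitions translate directly into code. The following sketch (illustrative names; minimization assumed) computes the opposite of a candidate over box bounds [a, b] and keeps whichever of the pair is fitter.

import numpy as np

def opposite(x, a, b):
    """Definitions 1 and 2: componentwise opposite point a + b - x."""
    return a + b - x

def opposition_select(x, a, b, f):
    """Definition 3: evaluate a candidate and its opposite, keep the better
    one (minimization assumed)."""
    x_opp = opposite(x, a, b)
    return x if f(x) <= f(x_opp) else x_opp

# small usage example on the sphere function
if __name__ == "__main__":
    a, b = np.full(3, -5.0), np.full(3, 5.0)
    x = np.array([4.0, -3.0, 2.0])
    sphere = lambda v: float(np.sum(v ** 2))
    print(opposite(x, a, b))                 # [-4.  3. -2.]
    print(opposition_select(x, a, b, sphere))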

4. Enhanced AMO Using Opposition-Based Learning

4.1. Basic Concept

According to basic probability, a randomly generated candidate solution has a fifty percent chance of being farther from the optimal solution than its opposite solution. The AMO algorithm usually generates candidate solutions randomly in the search space as the initial population because of the lack of prior information. During the optimization process, the time needed to find the optimal solution depends on the distance between the candidate solutions and the optimum. By applying opposition-based learning, we not only evaluate the current candidate but also its opposite candidate; this provides a greater chance of finding solutions closer to the global optimum.

Let \(x \in [a, b]\) be a solution in the current search space; the new solution in the opposite space is
\[ x^{*} = k(a + b) - x. \]

From the above definition, we can infer that \(x^{*} \in [k(a+b) - b,\; k(a+b) - a]\). Obviously, through the opposite transformation, the center of the search space is changed from \((a + b)/2\) to \(k(a + b) - (a + b)/2\).

Here \(k\) is a real number in \([0, 1]\), and it is often generated randomly; it denotes the probability of reverse learning.
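A minimal sketch of this k-weighted opposite transformation follows; the function name is ours, and k is drawn uniformly from [0, 1] as suggested above.

import numpy as np

def k_opposite(x, a, b, rng):
    """Opposite solution x* = k*(a + b) - x with a random k in [0, 1]."""
    k = rng.random()
    return k * (a + b) - x

# example: k_opposite(np.array([4.0, -3.0]), -5.0, 5.0, np.random.default_rng())

With k = 1 this is exactly the opposite point of Definition 1; a random k spreads the opposite candidates over a family of reflected intervals.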

4.2. Opposition-Based AMO Algorithm

In this paper, we introduce a novel algorithm which combines the opposition learning mechanism with the AMO algorithm. By introducing opposition-based learning, we consider candidate solutions in both the current search space and its opposite space simultaneously. Selecting the better individuals from the two search spaces provides a greater chance of finding the global optimal solution and largely speeds up convergence.

During both population initialization and evolution, the opposition learning mechanism lets us consider more candidate solutions, from which we choose the most promising ones for evolution.

The opposition-based population initialization proceeds as follows (a code sketch is given after the boundary-handling rule below).
(1) Initialize the population in the search space randomly.
(2) Calculate the opposite population from the initial population; each dimension is calculated as
\[ OX_{i,j} = k(a_j + b_j) - X_{i,j}, \]
where \(X_{i,j}\) and \(OX_{i,j}\) denote the \(j\)th variable of the \(i\)th vector of the population and of the opposite population, respectively, and \(k\) is a real number in \([0, 1]\) which denotes the probability of reverse learning.
(3) Choose the NP fittest individuals from the union of the random population and the opposite population, according to their fitness values, as the initial population.

During the evolution process, we still adopt the opposition learning method to increase the chance of finding the optimal solution. When a new individual is generated or joins the population, its opposite is also considered; if the fitness of the opposite solution is better than that of the new individual, the opposite solution is adopted; otherwise, the new individual is kept.

However, the opposition learning method may not suit all kinds of optimization problems. For instance, when the opposite candidate is calculated, the solution may jump outside the solution space, making it invalid. To avoid this, such a transformed candidate is reassigned a random value:
\[ OX_{i,j} = a_j + \mathrm{rand} \cdot (b_j - a_j), \quad \text{if } OX_{i,j} < a_j \text{ or } OX_{i,j} > b_j, \]
where rand is a random number between 0 and 1.
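Putting steps (1)–(3) and the boundary rule together, a hedged NumPy sketch of the opposition-based initialization might look as follows (the function name and the minimization assumption are ours).

import numpy as np

def opposition_init(NP, D, a, b, f, rng):
    """Steps (1)-(3): random population, k-weighted opposite population with
    random repair of out-of-range components, then keep the NP fittest
    individuals of the union (minimization assumed)."""
    X = a + rng.random((NP, D)) * (b - a)        # (1) random population in [a, b]
    k = rng.random()                             # probability of reverse learning
    OX = k * (a + b) - X                         # (2) opposite population
    bad = (OX < a) | (OX > b)                    # repair invalid components randomly
    OX[bad] = (a + rng.random((NP, D)) * (b - a))[bad]
    union = np.vstack([X, OX])                   # (3) keep the NP fittest of the union
    fit = np.array([f(ind) for ind in union])
    keep = np.argsort(fit)[:NP]
    return union[keep], fit[keep]

# usage, e.g. on the sphere function over [-100, 100]^30:
# X0, fit0 = opposition_init(50, 30, -100.0, 100.0,
#                            lambda v: float(np.sum(v ** 2)),
#                            np.random.default_rng(0))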

The opposition learning based AMO algorithm is described in Algorithm 2.

(1)  Begin
(2)  Set the generation counter G = 1, and randomly initialize a population of NP animals X_i.
(3)  Evaluate the fitness of each individual X_i.
(4)  For i = 1 to NP do
(5)   For j = 1 to D do
(6)    OX_{i,j} = k(a_j + b_j) - X_{i,j}
(7)   End for
(8)   Calculate the fitness value of OX_i
(9)  End for
(10) Select the NP fittest individuals from {X, OX} as the initial population
(11) While the stopping criterion is not satisfied do
(12)  For i = 1 to NP do
(13)   For j = 1 to D do
(14)    X_{i,j,G+1} = X_{i,j,G} + delta * (X_{neighborhood,j,G} - X_{i,j,G})
(15)   End for
(16)   If rand < Pa then
(17)    For j = 1 to D do
(18)     OX_{i,j,G+1} = k(a_j + b_j) - X_{i,j,G+1}
(19)    End for
(20)   End if
(21)  End for
(22)  Select the NP fittest individuals from {X_{G+1}, OX_{G+1}} as the current population
(23)  For i = 1 to NP do
(24)   For j = 1 to D do
(25)    Select r1 and r2 randomly, with r1 != r2 != i
(26)    If rand > Pa then
(27)     X_{i,j,G+1} = X_{r1,j,G} + rand * (X_{best,j,G} - X_{i,j,G}) + rand * (X_{r2,j,G} - X_{i,j,G})
(28)    End if
(29)   End for
(30)  End for
(31)  For i = 1 to NP do
(32)   Evaluate the offspring X_{i,G+1}
(33)   If X_{i,G+1} is better than X_{i,G} then
(34)    X_{i,G} = X_{i,G+1}
(35)   End if
(36)  End for
(37)  Memorize the best solution achieved so far
(38)  G = G + 1
(39) End while
(40) End

Algorithm 2: The opposition-based AMO algorithm.
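For readers who prefer runnable code, the following compact NumPy sketch implements our reading of Algorithm 2 end to end. The ring neighborhood, the rank-based Pa, the single random k per opposite population, and the minimization convention are assumptions on our part rather than details fixed by the paper.

import numpy as np

def obamo(f, lower, upper, NP=50, D=30, max_gen=1500, seed=0):
    """Compact sketch of the opposition-based AMO loop (Algorithm 2),
    written for minimization over the box [lower, upper]^D."""
    rng = np.random.default_rng(seed)
    lo = np.full(D, lower, dtype=float)
    hi = np.full(D, upper, dtype=float)

    def repair(P):
        # reset out-of-range components to random points inside [lo, hi]
        bad = (P < lo) | (P > hi)
        P[bad] = (lo + rng.random(P.shape) * (hi - lo))[bad]
        return P

    def k_opposite(P):
        # k-weighted opposite population, kept inside the box
        return repair(rng.random() * (lo + hi) - P)

    def evaluate(P):
        return np.array([f(ind) for ind in P])

    def keep_fittest(P, fit, n):
        idx = np.argsort(fit)[:n]
        return P[idx].copy(), fit[idx].copy()

    def rank_prob(fit):
        # replacement probability by fitness rank: best -> 1/NP, worst -> 1
        r = np.empty(NP, dtype=int)
        r[np.argsort(fit)] = np.arange(1, NP + 1)
        return r / NP

    # opposition-based initialization (steps (1)-(3) of Section 4.2)
    X = lo + rng.random((NP, D)) * (hi - lo)
    cand = np.vstack([X, k_opposite(X)])
    X, fit = keep_fittest(cand, evaluate(cand), NP)

    for _ in range(max_gen):
        # migration process: follow a random ring neighbor with a Gaussian step
        Pa = rank_prob(fit)
        V = X.copy()
        for i in range(NP):
            nb = (i + rng.choice([-2, -1, 1, 2])) % NP
            V[i] = X[i] + rng.normal() * (X[nb] - X[i])
        V = repair(V)
        OV = V.copy()
        gate = rng.random(NP) < Pa               # opposition applied with probability Pa
        OV[gate] = k_opposite(V[gate])
        cand = np.vstack([V, OV])
        X, fit = keep_fittest(cand, evaluate(cand), NP)

        # population updating process (Algorithm 1) with greedy acceptance
        Pa = rank_prob(fit)
        best = X[np.argmin(fit)].copy()
        U = X.copy()
        for i in range(NP):
            r1, r2 = rng.choice([k for k in range(NP) if k != i], size=2, replace=False)
            for j in range(D):
                if rng.random() > Pa[i]:
                    U[i, j] = (X[r1, j]
                               + rng.random() * (best[j] - X[i, j])
                               + rng.random() * (X[r2, j] - X[i, j]))
        U = repair(U)
        ufit = evaluate(U)
        better = ufit < fit
        X[better] = U[better]
        fit[better] = ufit[better]

    i_best = int(np.argmin(fit))
    return X[i_best], float(fit[i_best])

# usage: minimize the sphere function on [-100, 100]^30
# x_best, f_best = obamo(lambda v: float(np.sum(v ** 2)), -100.0, 100.0, max_gen=200)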

5. Experimental Results

To evaluate the performance of our algorithm, we applied it to the 23 standard benchmark functions shown in Table 1. These functions have been widely used in the literature.

The maximum number of generations is 1500 for , , , , and ; 2000 for and ; 3000 for , , and ; 5000 for and ; 400 for ; 100 for , , , , , , and ; 30 for ; and 200 for . The population size is 50 because the algorithm has two phases. The results of the algorithm on the 23 test problems are presented in Tables 2, 3, and 4.

As seen from Tables 2–4, the opposition-based AMO algorithm significantly improves the results on functions , , , , , and , achieves the same performance as AMO on most of the other functions, and is better than PSO, DE, ABC, and FA. This indicates that the algorithm can accelerate the convergence rate and find better solutions for some optimization problems without leading to premature convergence.

Figure 1 shows the evolutionary process of a test function optimized by AMO and by the opposition-learning-based AMO. The horizontal axis denotes the number of iterations, and the vertical axis is the best value of the objective function. The graph shows that the convergence of opposition-based AMO is clearly faster and the accuracy of the optimization is also improved.

6. Conclusions

A novel opposition-based AMO algorithm is proposed in this paper. The approach provides a greater chance of finding better solutions by transforming candidate solutions from the current search space to a new search space. Experimental results show that, compared with the original AMO, the proposed algorithm is more efficient on most of the test functions. However, the experimental results also show that the algorithm is not suitable for all kinds of problems; for some optimization problems it brings no significant improvement. How to adapt the algorithm to a wider range of optimization problems is worth further study.

Conflict of Interests

The authors declare that they do not have any commercial or associative interest that represents a conflict of interests in connection with the work submitted.