Abstract

The Bacterial Foraging Optimization (BFO) algorithm is a recently proposed swarm intelligence algorithm inspired by the foraging and chemotactic behavior of bacteria. However, its optimization ability is limited compared with other classic algorithms because of several shortcomings. This paper presents an improved BFO algorithm. In the new algorithm, a lifecycle model of bacteria is established: the bacteria can split, die, or migrate dynamically during the foraging process, and the population size varies as the algorithm runs. Social learning is also introduced so that the bacteria tumble towards better directions in the chemotactic steps. In addition, adaptive step lengths are employed in chemotaxis. The new algorithm, named BFOLS, is tested on a set of benchmark functions with dimensions of 2 and 20, with the canonical BFO, PSO, and GA algorithms employed for comparison. Experimental results and statistical analysis show that BFOLS offers significant improvements over the original BFO algorithm. In particular, with dimension of 20 it has the best performance among the four algorithms.

1. Introduction

Swarm intelligence is an innovative optimization technique inspired by the social behaviors of animal swarms in nature. Although the individuals have only simple behaviors and no centralized control, complex collective intelligence can emerge at the swarm level through their interaction and cooperation. In recent years, several swarm intelligence algorithms have been proposed, such as Ant Colony Optimization (ACO) [1], Particle Swarm Optimization (PSO) [2], Artificial Bee Colony (ABC) [3], and Bacterial Foraging Optimization (BFO). The BFO algorithm was first proposed by Passino [4] in 2002. It is inspired by the foraging and chemotactic behaviors of bacteria, especially Escherichia coli (E. coli). By smooth runs and tumbles, E. coli can move towards nutrient-rich areas and escape from poisonous areas in the environment. Chemotaxis is the most attractive behavior of bacteria and has been studied by many researchers [5, 6]. By modeling the problem as a foraging environment, the BFO algorithm and its variants have been applied to many numerical optimization [7, 8] and engineering optimization problems, such as distributed optimization [9], job shop scheduling [10], image processing [11], and stock market prediction [12].

However, the original BFO has several shortcomings. Dispersal, reproduction, and elimination each happen after a fixed number of chemotactic operations; the appropriate timing and method for dispersal and reproduction are important, and unsuitable choices may destroy the stability of the population. The tumble angles in the chemotactic phase are generated randomly; as a result, the algorithm behaves much like a random search, except that it tries further steps in better directions. The bacteria swarm lacks interaction between individuals, so good information carried by individuals in higher-nutrient areas cannot be shared with and utilized by other bacteria. Finally, the swim step length in the original BFO algorithm is a constant. In most cases, a bacterium will run one more step if its position is better than the last one, so if the swim step is large in the final stage (e.g., larger than the distance between the current position and the optimal point), the bacterium will repeatedly overshoot the optimal point, which makes the swarm hard to converge.

In this paper, several adaptive strategies are used to improve the original BFO algorithm. First, a lifecycle model of bacteria is proposed: bacteria split or die depending on the nutrient obtained during their foraging. Second, social learning is introduced to enhance information sharing between bacteria; the tumble angles are no longer generated randomly but are directed by the swarm's memory. Last, an adaptive search strategy is employed so that the bacteria can use different step lengths in different situations.

The rest of the paper is organized as follows. In Section 2, we introduce the original BFO algorithm, giving its features and pseudocode. In Section 3, the adaptive strategies of the BFOLS algorithm are described in detail. In Section 4, the BFOLS algorithm is tested on a set of benchmark functions and compared with several other algorithms; the results are presented and discussed, the settings of its new parameters are tested, and the variation of its population size is simulated. Finally, conclusions are drawn in Section 5.

2. Original Bacterial Foraging Optimization

E. coli is one of the earliest-studied bacteria. It has a plasma membrane, cell wall, and capsule that contain the cytoplasm and nucleoid. In addition, it has several flagella randomly distributed around its cell wall. The flagella rotate in the same direction at about 100–200 revolutions per second [13]. If the flagella rotate clockwise, they pull on the cell and make it "tumble"; if they rotate counterclockwise, their effects accumulate by forming a bundle, which makes the bacterium "run" in one direction [4], as shown in Figure 1.

The bacteria can sense the nutrient concentration in the environment. By tumbling and running, they search for nutrient-rich areas and keep away from poisonous areas. By simulating this foraging process, Passino proposed the Bacterial Foraging Optimization (BFO) algorithm. Its main mechanisms are described as follows.

2.1. Chemotaxis

Chemotaxis is the main driver of the bacteria's foraging process [14]. It consists of a tumble followed by several runs. In BFO, the position update that simulates the chemotaxis procedure is given by (2.1):

θ^i(j + 1) = θ^i(j) + C(i) φ(j),  (2.1)

where θ^i(j) is the position of the ith bacterium at the jth chemotactic step, C(i) is the step length of the ith bacterium during chemotaxis, and φ(j) is a unit vector that stands for the swimming direction after a tumble. It can be generated by (2.2), where Δ(i) is a randomly produced vector with the same dimension as the problem:

φ(j) = Δ(i) / √(Δ(i)ᵀ Δ(i)).  (2.2)

In each chemotactic step, the bacterium first generates a tumble direction and then moves in that direction using (2.1). If the nutrient concentration at the new position is higher than at the last position, it runs one more step in the same direction. This procedure continues until the nutrient gets worse or the maximum number of run steps is reached. The maximum number of run steps is controlled by a parameter called N_s.
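As an illustration, the following Python sketch shows one chemotactic step (a tumble plus up to N_s runs) for a single bacterium on a minimization problem; the objective f, the step length C, and the swim limit Ns are placeholders, not values from the paper.

import numpy as np

def chemotactic_step(position, f, C, Ns):
    """One tumble followed by up to Ns runs in the tumbled direction (cf. (2.1), (2.2))."""
    delta = np.random.uniform(-1.0, 1.0, size=position.shape)  # random vector, cf. (2.2)
    phi = delta / np.sqrt(np.dot(delta, delta))                # unit tumble direction
    last_cost = f(position)
    position = position + C * phi                              # tumble move, cf. (2.1)
    cost = f(position)
    m = 0
    while m < Ns and cost < last_cost:                         # keep running while improving
        last_cost = cost
        position = position + C * phi
        cost = f(position)
        m += 1
    return position, cost

# Example usage (sphere function): chemotactic_step(np.zeros(2), lambda x: np.sum(x**2), C=0.1, Ns=4)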

2.2. Reproduction

After every N_c chemotactic steps, a reproduction step is taken in the bacteria population. The bacteria are sorted in descending order of the nutrient obtained in the preceding chemotactic processes. Bacteria in the first half of the population are regarded as having obtained sufficient nutrient, so they reproduce: each of them splits into two (one copy is duplicated at the same location). Bacteria in the remaining half of the population die and are removed from the population, so the population size stays the same after this procedure. Reproduction simulates the natural reproduction phenomenon: individuals with higher nutrient survive and are duplicated, which guarantees that potentially optimal areas are searched more carefully.
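A minimal sketch of this operator is given below, assuming each bacterium is stored as a (position, health) pair where position is a NumPy array and a larger health value means more nutrient obtained; these names are ours, not the paper's notation.

def reproduction(population):
    """Keep the better-fed half of the swarm and duplicate it; the size stays constant."""
    # Sort in descending order of accumulated nutrient (health).
    ranked = sorted(population, key=lambda b: b[1], reverse=True)
    survivors = ranked[:len(ranked) // 2]
    # Each survivor splits into two copies at the same location; the rest die.
    return survivors + [(pos.copy(), health) for pos, health in survivors]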

2.3. Elimination and Dispersal

In nature, changes in the environment where a population lives may affect its behavior. For example, a sudden change of temperature or nutrient concentration, or the flow of water, may cause bacteria to die or move to another place [15]. To simulate this phenomenon, elimination-dispersal is added to the BFO algorithm. After every N_re reproduction steps, an elimination-dispersal event happens. For each bacterium, a random number between 0 and 1 is generated; if this number is less than a predetermined parameter known as P_ed, the bacterium is eliminated and a new bacterium is generated in the environment. The operator can also be regarded as moving the bacterium to a randomly produced position. Elimination-dispersal events may destroy the chemotactic progress, but they may also improve the solutions, since dispersal might place bacteria in better positions. Overall, in contrast to reproduction, this operator enhances the diversity of the algorithm.
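The sketch below illustrates this operator under the same (position, health) representation as above; p_ed, lower, and upper are placeholders for the dispersal probability and the variable bounds.

import numpy as np

def eliminate_dispersal(population, p_ed, lower, upper):
    """With probability p_ed, replace each bacterium with one at a random position."""
    dispersed = []
    for pos, health in population:
        if np.random.rand() < p_ed:
            pos = np.random.uniform(lower, upper, size=pos.shape)  # new random position
        dispersed.append((pos, health))
    return dispersed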

In the BFO algorithm, elimination-dispersal events happen N_ed times. That is to say, there are three nested loops over the bacteria population in BFO: the outer loop is the elimination-dispersal event, the middle loop is the reproduction event, and the inner loop is the chemotactic event. The algorithm ends after all three loops are finished. The pseudocode of the original BFO algorithm is given in Pseudocode 1.

3. Bacterial Foraging Optimization with Lifecycle and Social Learning

To improve the optimization ability of BFO, many variants have been proposed. In the proposed BFOLS algorithm, three strategies are used to improve the original BFO.

3.1. Lifecycle Model of Bacterium

As mentioned above, the original BFO algorithm runs three nested loops over the population: bacteria reproduce after N_c chemotactic steps and disperse after N_re reproduction steps. As a result, the settings of N_c and N_re are important to the performance of the algorithm, and unsuitable values may destroy the chemotactic searching progress [16]. To avoid this, we remove the three loops in the BFOLS algorithm. Instead, each bacterium decides whether to reproduce, die, or migrate according to certain conditions in its own lifecycle.

The idea of a lifecycle has been used in some swarm intelligence algorithms [17, 18]. Based on Niu's model in [18], a new lifecycle model of bacteria is established in this paper. In the model, a bacterium is represented by a six-tuple whose elements are its position, fitness, nutrient, state, tumble direction, and step length, respectively. It should be noted that fitness is the evaluation of the bacterium's current position, whereas nutrient is the total nutrient gained by the bacterium over its whole search process.
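For illustration, the six-tuple could be represented by a small data structure such as the following sketch; the field names are ours, not the paper's notation.

from dataclasses import dataclass
import numpy as np

@dataclass
class Bacterium:
    position: np.ndarray           # current position in the search space
    fitness: float                 # evaluation of the current position
    nutrient: float = 0.0          # total nutrient gained over the whole search
    state: str = "born"            # one of: born, forage, split, die, migrate
    direction: np.ndarray = None   # last tumble direction (unit vector)
    step_length: float = 0.0       # individual chemotactic step length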

We define the nutrient updating rule in (3.2), which compares the fitness of the bacterium's new position with the fitness of its last position (for a minimization problem, fitness is larger when the function value is smaller). At the initialization stage, the nutrient of every bacterium is zero. During a bacterium's chemotactic process, if the new position is better than the last one, the bacterium is regarded as gaining nutrient from the environment and its nutrient is increased by one; otherwise, it loses nutrient in the search process and its nutrient is reduced by one.
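Under this convention (larger fitness is better), the update can be sketched in one line, using the Bacterium structure above.

def update_nutrient(bacterium, new_fitness, last_fitness):
    """Gain one unit of nutrient on improvement, lose one otherwise (cf. (3.2))."""
    bacterium.nutrient += 1 if new_fitness > last_fitness else -1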

There are five states defined in the lifecycle model: born, forage, split, die, and migrate; that is, State ∈ {born, forage, split, die, migrate}. The bacteria are born when they are initialized, and then they forage for nutrient. In the foraging process, if a bacterium obtains sufficient nutrient, it splits into two at the same position; if a bacterium enters a bad area and its nutrient drops below a certain threshold, it dies and is eliminated from the population; if a bacterium has a poor nutrient value but has not yet died, it may migrate to a random position with a certain probability. After splitting or migrating, the bacterium is regarded as newly born and its nutrient is reset to 0. The state transition diagram is shown in Figure 2.

The split criterion and the death criterion are given in (3.3) and (3.4), where N_split and N_adapt are two new parameters used to control and adjust them, S_init is the initial population size, and S is the current population size. Note that the population size increases by one when a bacterium splits and decreases by one when a bacterium dies; as a result, the population size may vary during the search. At the beginning of the algorithm, since S equals S_init, a bacterium splits when its nutrient is larger than N_split and dies when its nutrient is smaller than 0. We do not need to worry that the population will shrink suddenly at the very beginning, because by the time the bacteria first decide whether to die they have already passed through the chemotactic process, so most of their nutrient values are larger than zero. As the algorithm runs, the population size may change and differ from the initial population size. Under certain conditions the population size could drop to zero, which would make the algorithm unable to continue; conversely, if the population becomes too large, it costs too much computation and is hard to evolve. To keep the population size in a reasonable range, a self-adaptive strategy is introduced: if S is larger than S_init, the split threshold increases by one for each unit of the difference, and if S_init is larger than S, the death threshold decreases by one for each unit of the difference. This strategy is also in accord with nature, since the behavior of organisms is affected by the environment they live in. If the population is too crowded, competition between individuals increases and death becomes common; if the population is small, individuals survive and reproduce more easily. With this strategy, the split threshold is raised when the population is large and the death condition is stricter when the population is small, which keeps the population size in a relatively stable range.

When the nutrient of a bacterium is less than zero but it has not died yet, it may migrate with a certain probability: a random number is generated, and if it is less than the migration probability, the bacterium moves to a randomly produced position and its nutrient is reset to zero.

It should be mentioned that the splitting, death, and migration conditions are checked in sequence; if one of them is triggered, the algorithm breaks out of the current cycle and does not execute the remaining checks.
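The sketch below gives one plausible reading of this lifecycle decision, reusing the Bacterium structure above. The exact forms of (3.3) and (3.4) are not reproduced in the text, so the way N_adapt scales the threshold adjustment here is our assumption; dead bacteria are only marked, and the caller is assumed to filter them out afterwards.

import copy
import numpy as np

def lifecycle_decision(bact, population, S_init, N_split, N_adapt, p_migrate, lower, upper):
    """Check split, death, and migration in sequence for one bacterium (illustrative only)."""
    S = len(population)
    # Assumed adaptive thresholds: splitting gets harder when the swarm is crowded,
    # dying gets harder when the swarm is small; N_adapt's exact role is an assumption.
    split_threshold = max(N_split, N_split + (S - S_init) / N_adapt)
    death_threshold = min(0.0, (S - S_init) / N_adapt)
    if bact.nutrient > split_threshold:
        clone = copy.deepcopy(bact)
        clone.nutrient = bact.nutrient = 0           # both copies count as newly born
        population.append(clone)
        return "split"
    if bact.nutrient < death_threshold:
        bact.state = "die"                           # caller removes bacteria marked dead
        return "die"
    if bact.nutrient < 0 and np.random.rand() < p_migrate:
        bact.position = np.random.uniform(lower, upper, size=bact.position.shape)
        bact.nutrient = 0
        return "migrate"
    return "forage"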

3.2. Social Learning

Social learning is the core driver in the formation of the collective knowledge of swarm intelligence [19]. For example, in the PSO algorithm, particles learn from the best particles and from themselves [20]; in the ABC algorithm, bees learn from their neighbors [21]. However, social learning is hardly used in the original BFO algorithm: in its chemotactic steps the tumble directions are generated randomly, so information carried by the bacteria at nutrient-rich positions is not utilized. In our BFOLS, we assume that every bacterium can memorize the best position it has reached and share this information with the other bacteria. In the chemotactic steps, a bacterium decides which direction to tumble using its personal best position and the population's global best position. Based on this assumption, the tumble directions in BFOLS are generated using (3.5), where the first attractor is the best position found by the whole population so far and the second is the bacterium's personal historical best. The tumble direction is then normalized to a unit vector by (2.2), and the position update still uses (2.1).

The direction-generating equation is similar to the velocity update equation of the PSO algorithm [22]: both use the global best and the personal best. However, they are not the same. First, there is no inertia term in (3.5); a bacterium usually runs more than once in a chemotactic step, so an inertia term would accumulate over the runs and enlarge the distance between the new position and the current position tremendously. Second, there are no learning factors in (3.5), because the direction is normalized to a unit vector, which makes learning factors meaningless.

By social learning, the bacteria will move to better areas with higher probability as good information is fully utilized.
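A sketch of this direction generation is given below. Whether (3.5) applies random weights to the two attraction terms is not shown in the text, so the plain sum used here is only one plausible reading; the random fallback for a zero vector is also our addition.

import numpy as np

def social_tumble_direction(position, personal_best, global_best):
    """Tumble direction attracted by the personal and global bests, normalized as in (2.2)."""
    direction = (global_best - position) + (personal_best - position)
    norm = np.linalg.norm(direction)
    if norm == 0.0:                      # already at both attractors: fall back to a random tumble
        direction = np.random.uniform(-1.0, 1.0, size=position.shape)
        norm = np.linalg.norm(direction)
    return direction / norm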

3.3. Adaptive Search Strategy

As mentioned above, a constant step length makes the population hard to converge to the optimal point. In an intelligent optimization algorithm, it is important to balance exploration and exploitation. Exploration ensures that the algorithm can search the whole space and escape from local optima; exploitation guarantees that the algorithm can search local areas carefully and converge to the optimal point. Generally, in the early stage of an algorithm, exploration should be emphasized to cover all areas; in the later stage, exploitation should be emphasized to search the good areas intensively.

There are various step-length-varying strategies [23, 24]. In BFOLS, we use a decreasing step length: the step length decreases linearly with the number of fitness evaluations, as shown in (3.6):

C = C_start − (C_start − C_end) · (FE / FE_total),  (3.6)

where C_start is the step length at the beginning, C_end is the step length at the end, FE is the current count of fitness evaluations, and FE_total is the total number of fitness evaluations. In the early stage of the BFOLS algorithm, the larger step length provides exploration ability; in the later stage, the small step length turns the algorithm towards exploitation.
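A minimal sketch of this schedule, with the symbols of (3.6) as parameter names:

def population_step_length(C_start, C_end, evaluations_used, total_evaluations):
    """Linearly decrease the shared step length from C_start to C_end (cf. (3.6))."""
    fraction = evaluations_used / float(total_evaluations)
    return C_start - (C_start - C_end) * fraction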

To strengthen this idea further, an elaborate adaptive search strategy is introduced on top of the decreasing step length. In the new strategy, the bacteria's step lengths may differ from each other, and their values are related to their nutrient, as calculated by (3.7).

With a higher nutrient value, the bacterium's step length is shortened further. This also accords with food-searching behavior in nature: a higher nutrient value indicates that the bacterium is more likely located in a potentially nutrient-rich area, so it is worth exploiting that area carefully with a smaller step length.
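Since the exact form of (3.7) is not reproduced in the text, the sketch below shows only one illustrative way to realize the described behavior, shrinking the shared step as the bacterium's nutrient grows.

def individual_step_length(shared_step, nutrient):
    """Shrink the shared step further for well-fed bacteria (an illustrative form of (3.7))."""
    return shared_step / (1.0 + max(0.0, nutrient))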

The pseudocode of BFOLS algorithm is listed in Pseudocode 2.

4. Experiments

In this section, we first test the optimization ability of the BFOLS algorithm on a set of benchmark functions. Several other intelligent algorithms are employed for comparison, including the original BFO, PSO, and the Genetic Algorithm (GA) [25]. Statistical techniques are also used [26, 27]: the Iman-Davenport test and the Holm method are employed to analyze the differences among these algorithms. As two extra control parameters, N_split and N_adapt, are introduced, their settings are then tested to determine suitable values. Finally, the variation of the population size in BFOLS is tracked and analyzed.

4.1. Performance Test of BFOLS
4.1.1. Benchmark Functions

The BFOLS algorithm was tested on a set of benchmark functions with dimensions of 2 and 20, respectively. The functions are listed in Table 1. Among them, the first group are basic functions widely adopted by other researchers [28, 29], whose global minima are all zero; the remaining functions are shifted and rotated functions selected from the CEC2005 test bed, whose global minima differ from each other. For each function, its standard variable range is used. One function is a special case: it has no bounds, and its global optimum lies outside its initialization range.

It should be mentioned that a bacterium may run a different number of times in each chemotactic step. As a result, different algorithms may incur different computational cost per iteration, and the iteration count is no longer a reasonable measure. To compare the algorithms fairly, we use the number of function evaluations (FEs) as the measure, as is done in many other works [30–32]. All algorithms were terminated after 20,000 function evaluations on dimension 2 and 60,000 function evaluations on dimension 20.

4.1.2. Parameter Settings for the Involved Algorithms

The population sizes of all algorithms were 50. In the original BFO algorithm, the parameters are set similarly to those in Passino's work [4], except that N_c is smaller and N_ed is larger. This is because the termination criterion has been changed to the number of function evaluations: a smaller N_c guarantees that the algorithm can run through the chemotactic, reproduction, and elimination-dispersal processes, and a larger N_ed guarantees that BFO will not terminate before the maximum number of function evaluations. In our BFOLS algorithm, as mentioned previously, N_c, N_re, and N_ed are no longer needed, and the remaining shared parameters are the same as in BFO. The start step length C_start and end step length C_end are defined in terms of the lower bound and upper bound of the problem variables, which makes the algorithm suitable for problems of different scales. The step length of the whole population decreases from C_start to C_end linearly, and the step length of each bacterium is further calculated by (3.7). The two control parameters N_split and N_adapt are set to 30 and 5. Standard PSO and GA algorithms were used in this experiment. In the PSO algorithm, the inertia weight decreased from 0.9 to 0.7 and the learning factors were set following [33]. In the GA algorithm, single-point crossover is used; the crossover probability is 0.95 and the mutation probability is 0.1 [25].

4.1.3. Experiment Results and Statistical Analysis

The error values of the BFOLS, BFO, PSO, and GA algorithms on the benchmark functions with dimensions 2 and 20 are listed in Tables 2 and 3, respectively. The mean and standard deviation values obtained over 30 independent runs are given, and the best value on each function is marked in bold. As there are 14 functions for each dimension, giving all the convergence plots would take too much space; only the convergence plots of one representative function are given, as seen in Figure 3.

With dimension 2, PSO obtained the best mean error values on 8 functions, BFOLS performed best on 5, and BFO performed best on the remaining one. With dimension 20, BFOLS obtained the best results on 12 of the 14 functions, while PSO and GA each performed best on one of the remaining two. The average rankings of the four algorithms are listed in Table 4; the smaller the value, the better the algorithm performs. On dimension 2, PSO achieves the best average ranking, followed by BFOLS; on dimension 20, BFOLS achieves the best average ranking.

Table 5 shows the results of the Iman-Davenport statistical test. The critical values are taken at the 0.05 level from the F-distribution with 3 and 39 degrees of freedom. On both dimensions 2 and 20, the Iman-Davenport values are larger than the critical values, which means that significant differences exist among the rankings of the algorithms in both cases.
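For reference, the Iman-Davenport statistic can be derived from the Friedman statistic; the sketch below uses SciPy and takes an error table of N functions by k algorithms. The example call uses dummy random data, not the paper's actual results.

import numpy as np
from scipy import stats

def iman_davenport(error_table):
    """error_table: N functions x k algorithms of mean errors. Returns the Iman-Davenport F value."""
    n_funcs, k = error_table.shape
    # Friedman chi-square computed from the per-function rankings of the k algorithms.
    chi2, _ = stats.friedmanchisquare(*[error_table[:, j] for j in range(k)])
    # Iman-Davenport correction; F-distributed with (k - 1, (k - 1)(N - 1)) degrees of freedom.
    return (n_funcs - 1) * chi2 / (n_funcs * (k - 1) - chi2)

# Example with dummy data for 14 functions and 4 algorithms: iman_davenport(np.random.rand(14, 4))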

Holm tests were performed as a post hoc procedure. With dimension 2, PSO performed best, so it is chosen as the control algorithm and the other algorithms are compared with it; with dimension 20, BFOLS is the control algorithm. The results of the Holm tests with dimensions 2 and 20 are given in Tables 6 and 7, respectively. The α/i values are computed with α = 0.05; if α = 0.1, they are twice the values listed.

As shown in Table 6, the p values of GA and BFO are smaller than their α/i values, which means that the equality hypotheses are rejected and significant differences exist between these two algorithms and the control algorithm, PSO. The p value of BFOLS is larger than its α/i value, so the equality hypothesis cannot be rejected; no significant difference exists, and BFOLS can be regarded as equivalent to PSO. The situation is the same when α = 0.1.

With dimension 20, BFOLS is the control algorithm. The p values of BFO and GA are smaller than their α/i values, so BFOLS is significantly better than these two algorithms. The equality hypothesis between BFOLS and PSO cannot be rejected when α = 0.05; however, when α = 0.1, the α/i value is 0.1 and the p value is smaller than it, so BFOLS is also significantly better than PSO at the 0.1 level.

Overall, BFOLS shows significant improvement over the original BFO algorithm, and its optimization ability is also better than that of the classic PSO and GA algorithms on higher-dimensional problems.

4.2. Parameters Setting Test of BFOLS

In BFOLS, two extra parameters, N_split and N_adapt, are introduced. N_split is the initial split threshold, and N_adapt makes the split and death thresholds adjust to the environment adaptively. To determine good settings for these two parameters, we tested the algorithm on four benchmark functions with different N_split and N_adapt values.

The four benchmark functions are Rosenbrock, Griewank, Ackley, and Schwefel 2.22, each tested with dimensions 2 and 20. N_split should be somewhat large so that the bacteria do not reproduce sharply at the start, and N_adapt should be larger than zero to let the split and death thresholds vary adaptively; tests showed that the population size shrinks to zero and errors occur on some functions when N_adapt is zero. In the first group of tests, N_adapt was fixed at 5 and N_split was set to 10, 20, 30, 50, and 100 separately. In the second group, N_split was fixed at 30 and N_adapt was set to 1, 2, 5, 20, and 60 separately. The results obtained by BFOLS with the different parameter values are listed in Tables 8 and 9.

It is clear from the tables that, on most benchmark functions, the results obtained with different parameter values are of roughly the same order of magnitude. That is to say, the performance of BFOLS is not very sensitive to the parameter values on most functions. However, on the Ackley function with dimension 20 the situation changes. With N_adapt fixed, the results of BFOLS with N_split values of 10, 20, and 30 are much better than those with 50 and 100; with N_split fixed, the results with N_adapt values of 5 and 20 are clearly better than those with 1, 2, and 60. Meanwhile, the average ranks show that the best ranking is obtained with N_split equal to 30 or 100 in the first test and with N_adapt equal to 5 in the second test.

4.3. Population Size-Varying Simulation in BFOLS

As mentioned above, the population size of BFOLS may vary because the bacteria split and die. Therefore, the dynamic variation of the population size on six of the functions is tracked and recorded while the algorithm runs. The mean population-size curves on these functions with dimensions 2 and 20 are shown in Figure 4; the left subplot is for dimension 2 and the right one for dimension 20. Obvious regularities exist in both plots. With dimension 2, the population size of BFOLS first decreased and then increased at the end on all functions. With dimension 20, the population size increased quickly at the beginning, reached a peak, and then began to decrease. These curves arguably correspond to two adaptive behaviors in natural environments. As defined in (3.2), the nutrient update depends on the improvement or deterioration of the bacterium's position. On functions with dimension 2, the room for improvement is limited, which resembles a saturated environment: nutrient is scarce and competition between bacteria is fierce, so more bacteria die than split and the population size decreases. Towards the end of the algorithm, the competition eases as the number of bacteria falls, so the bacteria reproduce more often. On functions with dimension 20, there is much more room for improvement, which can be regarded as a eutrophic environment: bacteria split easily and the population size increases sharply at first; then the environment saturates, competition increases, and the population size begins to decrease.

5. Conclusions

This paper analyzes the shortcomings of the original Bacterial Foraging Optimization algorithm. To overcome them, an adaptive BFO algorithm with lifecycle and social learning, named BFOLS, is proposed. In the new algorithm, a lifecycle model of bacteria is established: the bacteria split, die, or migrate dynamically during foraging according to their nutrient. Social learning is also introduced in the chemotactic steps to guide the tumble direction: the tumble angles are no longer generated randomly but are produced using the population's global best, the bacterium's personal best, and its current position. Finally, an adaptive step-length strategy is employed: the step length of the population decreases linearly with the number of evaluations, and each individual further adjusts its step length according to its nutrient value.

To verify the optimization ability of the BFOLS algorithm, it is tested on a set of benchmark functions with dimensions 2 and 20, with the original BFO, PSO, and GA algorithms used for comparison. With dimension 2, it outperforms BFO and GA but is slightly worse than PSO. With dimension 20, it performs significantly better than GA and BFO, and at the 0.1 significance level it is also significantly better than PSO. The settings of the new parameters are tested and discussed; N_split and N_adapt are recommended to be 30 and 5, or values in that neighborhood. The variation of the population size is also tracked: with dimensions 2 and 20, the curves show obvious regularities and correspond to population-survival phenomena in nature. Overall, the proposed BFOLS algorithm is a powerful optimization algorithm. It offers significant improvements over the original BFO and shows competitive performance compared with other algorithms on higher-dimensional problems.

Further research efforts could focus on the following aspects. First, other step length strategies can be used. Second, more benchmark functions with different dimensions could be tested.

Pseudocode 1: The original BFO algorithm.

Initialization
For l = 1 : N_ed
  For k = 1 : N_re
    For j = 1 : N_c
      For i = 1 : S
        J_last = J(i);
        Generate a tumble angle for bacterium i;
        Update the position of bacterium i by (2.1);
        Recalculate the fitness J(i);
        m = 0;
        While (m < N_s)
          If (J(i) is better than J_last)
            J_last = J(i);
            Run one more step using (2.1);
            Recalculate the fitness J(i);
            m = m + 1;
          Else
            m = N_s;
          End if
        End while
      End for
      Update the best value achieved so far;
    End for
    Sort the population according to the accumulated nutrient (health);
    For i = 1 : S/2
      Bacterium(i + S/2) = Bacterium(i);
    End for
  End for
  For i = 1 : S
    If (rand < P_ed)
      Move bacterium i to a random position;
    End if
  End for
End for

Pseudocode 2: The BFOLS algorithm.

Initialization
While (termination conditions are not met)
  S = size of the last population; i = 0;
  While (i < S)
    i = i + 1;
    J_last = J(i);
    Generate a tumble direction for bacterium i by (3.5) and (2.2);
    Update the position of bacterium i by (2.1);
    Recalculate the fitness J(i) and update Nutrient(i) by (3.2);
    Update personal best and global best;
    m = 0;
    While (m < N_s)
      If (J(i) is better than J_last)
        J_last = J(i);
        Run one more step using (2.1);
        Recalculate the fitness J(i) and update Nutrient(i) by (3.2);
        Update personal best and global best;
        m = m + 1;
      Else
        m = N_s;
      End if
    End while
    If (Nutrient(i) is larger than the split threshold)
      Split bacterium i into two bacteria; Break;
    End if
    If (Nutrient(i) is less than the death threshold)
      Remove bacterium i from the population;
      S = S − 1; i = i − 1; Break;
    End if
    If (Nutrient(i) is less than 0 and rand < the migration probability)
      Move bacterium i to a random position;
    End if
  End while
End while

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant nos. 61174164, 61003208, 61105067, 71001072, and 71271140). The authors are also very grateful to the anonymous reviewers for their valuable suggestions and comments, which have improved the quality of this paper.