Abstract

Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; rather, the species develops a stronger ability to adapt to its environment and evolves toward perfection. LSO simulates the biological lifecycle through six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases, used to demonstrate optimization performance and stability, while the mechanical design problems test the algorithm's practicability. The results demonstrate the remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

1. Introduction

In nature, biological species are diverse, and an organism is any living thing (such as an animal, plant, or microorganism) [1]. The behaviors of organisms reveal their biological features. Some features are universal, such as foraging, reproduction, mutation, and metabolism, while for some organisms the features are unique and intelligent [2]. Ants exhibit division of labor and cooperation. Bees have special skills in the process of gathering honey. Birds have unique flight principles. Bacterial flagella provide chemotaxis during movement. Biological features enable organisms to adapt to complex living environments in the best way and to survive long-term in nature. Real-world optimization problems are similar to biological survival environments: both are complex. Therefore, with the purpose of solving complex real-world problems, researchers have begun to mimic biological phenomena by defining sets of rules and realizing those rules on computers [3]. Such rule sets are called bioinspired optimization techniques.

Currently, bioinspired optimization techniques possess abundant research results, and we divide the existing algorithms into three major categories: evolutionary computation, swarm intelligence, and others. Widely studied algorithms are as follows:
(1) evolutionary computation: (i) genetic algorithm; (ii) evolutionary programming; (iii) evolutionary strategy; (iv) genetic programming; (v) differential evolution; (vi) neuroevolution;
(2) swarm intelligence (SI): (i) particle swarm optimization; (ii) ant colony optimization; (iii) bacterial foraging optimization algorithm; (iv) artificial bee colony; (v) shuffled frog leaping algorithm; (vi) glowworm swarm optimization; (vii) cuckoo search; (viii) firefly algorithm; (ix) harmony search; (x) bat algorithm; (xi) wolf search;
(3) other algorithms: (i) artificial immune algorithm; (ii) artificial neural networks; (iii) cellular automata; (iv) cultural algorithm; (v) membrane computing; (vi) brain storm optimization; (vii) ecoinspired evolutionary algorithm; (viii) invasive weed optimization; (ix) dolphin echolocation.

Moreover, these bioinspired optimization algorithms have been widely applied to network optimization [4–7], data mining [8–10], production scheduling [11–14], power systems [15, 16], pattern recognition [17, 18], robotics applications [19–21], and so on.

All living organisms have a lifecycle, whether the commonest ants, butterflies, and goldfish around us or the uncommon Antarctic penguins and Arctic bears, and whether ferocious beasts or meek poultry. Although different organisms have different lifecycle lengths, they all undergo the process from birth to death. When an original life ends, a new life is generated. Biological evolution in nature follows the “cycle relay” pattern, a cyclic process of alternating life and death. The continuous repetition of this process sustains endless life on earth, and biological evolution becomes more and more perfect.

Inspired by the idea of the lifecycle, in 2002, Krink and Løvbjerg introduced a hybrid approach called the lifecycle model that simultaneously applies genetic algorithms (GAs), particle swarm optimization (PSO), and stochastic hill climbing to create a generally well-performing search heuristic [22]. In this model, the authors consider candidate solutions and their fitness as individuals, which, based on their recent search progress, can decide to become either a GA individual, a particle of a PSO, or a single stochastic hill climber.

In 2008, Niu et al. proposed a lifecycle model (LCM) to simulate bacterial evolution in a finite population of Escherichia coli (E. coli) bacteria [23]. In this simulation study, bacterial behaviors (chemotaxis, reproduction, extinction, and migration) during the whole lifecycle are viewed as evolutionary operators used to find the best nutrient concentration, which is taken as a potential global solution of the optimization problem.

In 2011, drawing on biological lifecycle theory, the lifecycle-based swarm optimization (LSO) algorithm was proposed for the first time [24]. Subsequently, 7 unimodal unconstrained optimization test functions, constrained optimization test functions, and engineering problems including the vehicle routing problem (VRP) and the vehicle routing problem with time windows (VRPTW) were adopted to test LSO performance [24–26]. These experiments demonstrate that LSO is a competitive and effective approach. In order to evaluate LSO performance more thoroughly, this paper uses 23 unconstrained benchmark functions to study the effectiveness and stability of LSO.

The rest of this paper is organized as follows. Sections 2 and 3 describe the proposed lifecycle-based swarm optimization (LSO) technique. Sections 4 and 5 present and discuss computational results. The last section draws conclusions and gives directions for future work.

2. Lifecycle-Based Swarm Optimization

2.1. Chemotaxis Operator

Based on its current location, an individual's next movement will be towards better places. The optimal individual of the population selects this foraging strategy. Since the optimal forager in the current iteration possesses the greatest energy, it has the ability to seek, within the global search scope, a location with more nutrient resources than its previous one. The seeking mode of the optimal forager differs from the migration of nonoptimal individuals: it is not a simple migration or position move but a more powerful foraging strategy, namely, chaos search. A better solution is sought directly using a chaotic variable:
(1) Denote the current optimization variable by $x$ and its fitness value by $f(x)$.
(2) Generate chaotic variables by the logistic mapping $z_{k+1} = \mu z_k (1 - z_k)$, $z_k \in (0,1)$, with control parameter $\mu = 4$.
(3) Transform the chaotic traverse range $(0,1)$ onto the optimization variable domain: $x_k = x_{\min} + z_k (x_{\max} - x_{\min})$, where $x_{\max}$ and $x_{\min}$ are the upper and lower boundaries of the search space.
(4) Compute the fitness value $f(x_k)$.
(5) If $f(x_k)$ is better than $f(x)$, then set $x = x_k$ and $f(x) = f(x_k)$.
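To make the operator concrete, the following minimal Python sketch implements the chaos-search step under the assumptions above (the function name, the number of chaotic probes n_probes, and the minimization convention are illustrative choices, not taken from the original paper):

import numpy as np

def chemotaxis(x_best, f, lower, upper, n_probes=50, mu=4.0):
    """Chaos search over the global scope via the logistic map (sketch).

    x_best       : position of the current best individual (1-D numpy array)
    f            : objective function to be minimized
    lower, upper : per-dimension boundaries of the search space
    """
    best_x, best_f = x_best.copy(), f(x_best)
    # Seed the chaotic sequence away from the fixed points of the map.
    z = np.random.uniform(0.01, 0.99, size=x_best.shape)
    for _ in range(n_probes):
        z = mu * z * (1.0 - z)              # logistic map iteration
        cand = lower + z * (upper - lower)  # map (0, 1) onto the domain
        f_cand = f(cand)
        if f_cand < best_f:                 # keep the better solution
            best_x, best_f = cand, f_cand
    return best_x, best_f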

2.2. Transposition Operator

Individuals selecting the nonsocial foraging strategy randomly migrate within their own energy scope:
$$x_i' = x_i + d_i \cdot N(0,1),$$
where $d_i$ is the migration distance of individual $x_i$, $N(0,1)$ is a normally distributed random number with mean 0 and standard deviation 1, $l_i$ and $u_i$ are the search-space boundaries of the individual, and $R$ is the range of the global search space.
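A minimal sketch of the transposition step, assuming the additive form reconstructed above; the parameter name step_scale (playing the role of the migration distance) and the clipping to the search boundary are illustrative assumptions:

import numpy as np

def transposition(x, step_scale, lower, upper):
    # Random migration of a nonsocial forager within its energy scope.
    # step_scale stands in for the migration distance d_i, here taken
    # relative to the range R of the global search space (an assumption).
    R = upper - lower
    x_new = x + step_scale * R * np.random.randn(*x.shape)  # N(0,1) step
    return np.clip(x_new, lower, upper)  # respect the search boundary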

2.3. Assimilation Operator

Individuals selecting the social foraging strategy perform the assimilation operator. They gain resources directly from the optimal individual by taking a random step towards it:
$$x_i' = x_i + r \cdot (x_{\text{best}} - x_i),$$
where $r$ is a uniform random sequence in the range $[0,1]$, $x_{\text{best}}$ is the best individual of the current population, $x_i$ is the position of an individual who performs the assimilation operator, and $x_i'$ is the next position of this individual.
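The assimilation step translates directly into code; a minimal sketch assuming a per-dimension uniform random sequence:

import numpy as np

def assimilation(x, x_best):
    # Social forager: random step from x towards the current best individual.
    r = np.random.uniform(0.0, 1.0, size=x.shape)  # uniform in [0, 1]
    return x + r * (x_best - x)                    # next position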

2.4. Crossover Operator

In LSO, the crossover operator uses the single-point crossover method. One crossover point is selected; the string from the beginning of the individual to the crossover point is copied from one parent, and the rest is copied from the second parent.
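A minimal sketch of single-point crossover for real-coded individuals (the real-coded representation is an assumption; the operator itself is standard):

import numpy as np

def single_point_crossover(parent_a, parent_b):
    # Genes before the cut point come from parent_a, the rest from parent_b.
    point = np.random.randint(1, parent_a.size)  # cut point, never at the ends
    return np.concatenate([parent_a[:point], parent_b[point:]])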

2.5. Selection Operator

According to the “survival of the fittest” theory, and to ensure a fixed population size, LSO adopts a method that retains some individuals and eliminates the others. In this algorithm, the selection operator performs an elitist selection strategy: the individuals with the best fitness values are chosen to pass to the next generation.
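A minimal sketch of the elitist selection step, assuming minimization so that lower fitness values are better:

import numpy as np

def elitist_selection(population, fitness, pop_size):
    # Keep the pop_size individuals with the best (lowest) fitness values.
    order = np.argsort(fitness)
    return population[order[:pop_size]], fitness[order[:pop_size]]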

2.6. Mutation Operator

In this algorithm, the mutation operator performs a dimension-mutation strategy. For every individual $x_i$, $i = 1, \ldots, N$, one dimension $j$, selected according to the mutation probability $p_m$, is relocated in the search space:
$$x_{i,j} = l_j + r \cdot (u_j - l_j),$$
where $l_j$ and $u_j$ are the lower and upper boundaries of the search space. In the $D$-dimensional search space, $x_{i,j}$ is the position of the $j$th dimension of the $i$th individual; the value $r$ is uniform in $[0,1]$.
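A minimal sketch of the dimension-mutation step as reconstructed above (the parameter name p_m is illustrative; lower and upper are per-dimension boundary arrays):

import numpy as np

def dimension_mutation(x, p_m, lower, upper):
    # With probability p_m, relocate one randomly chosen dimension of x.
    if np.random.rand() < p_m:
        j = np.random.randint(x.size)  # dimension to relocate
        x = x.copy()
        x[j] = lower[j] + np.random.rand() * (upper[j] - lower[j])
    return x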

3. Algorithm Description

Lifecycle-based swarm optimization is a population-based search technique: it evaluates the fitness value of all individuals and establishes an iterative process through implementation of the six operators proposed above. Each population is composed of a certain number of individuals and follows the clumped distribution. In each iteration, all individuals first select a foraging strategy and execute the corresponding foraging operator based on their fitness value and a randomly generated foraging probability; this is followed by the crossover, selection, and mutation operations. Finally, the next population, representing the new solutions, is generated. In the optimization process, the operations are random, but the resulting optimization behavior is not entirely random: the algorithm effectively utilizes historical information to speculate on the next solutions, which are likely to be closer to the optimum. This process is repeated from generation to generation and finally converges to the individual best adapted to the environment, yielding an optimal solution. Figure 1 shows the LSO algorithm flowchart.
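The following Python sketch ties the operators together into one LSO generation, following the flow described above; the 0.5 foraging-probability threshold, the fixed transposition step, and the random pairing scheme for crossover are illustrative assumptions, as the paper leaves these details to Figure 1. It assumes the operator functions sketched in Section 2, a 2-D numpy array pop, and per-dimension boundary arrays lower and upper.

import numpy as np

def lso_generation(pop, fitness, f, lower, upper, p_m=0.05):
    # One iteration: foraging, crossover, mutation, elitist selection (sketch).
    n = len(pop)
    best_i = int(np.argmin(fitness))
    # The best individual forages by chaos search (chemotaxis operator).
    pop[best_i], fitness[best_i] = chemotaxis(pop[best_i], f, lower, upper)
    # Other individuals choose social or nonsocial foraging at random.
    for i in range(n):
        if i == best_i:
            continue
        if np.random.rand() < 0.5:  # foraging probability (assumed)
            pop[i] = assimilation(pop[i], pop[best_i])
        else:
            pop[i] = transposition(pop[i], 0.1, lower, upper)
    # Crossover: create n children from randomly paired parents.
    children = [single_point_crossover(pop[np.random.randint(n)],
                                       pop[np.random.randint(n)])
                for _ in range(n)]
    # Mutation on the enlarged pool, then elitist selection back to size n.
    pool = np.array([dimension_mutation(ind, p_m, lower, upper)
                     for ind in list(pop) + children])
    pool_fit = np.array([f(ind) for ind in pool])
    return elitist_selection(pool, pool_fit, n)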

4. Experiment Settings

4.1. Illustrative Examples

To fully evaluate the performance of the LSO algorithm without bias, we employed 23 benchmark functions which have been widely used in the evolutionary computation domain to show solution quality and convergence rate [27]. These test functions are listed in the Appendix. Among them, functions $f_1$ to $f_7$ are unimodal functions, functions $f_8$ to $f_{13}$ are multimodal functions with many local minima, and functions $f_{14}$ to $f_{23}$ are multimodal functions with few local minima.
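For reference, here are minimal Python definitions of three representative functions from this suite, following the common numbering of the 23-function benchmark set of [27] (sphere $f_1$, generalized Rosenbrock $f_5$, Rastrigin $f_9$):

import numpy as np

def sphere(x):       # f1, unimodal: minimum 0 at the origin
    return np.sum(x**2)

def rosenbrock(x):   # f5: long narrow parabolic valley, minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def rastrigin(x):    # f9, multimodal: many regularly spaced local minima
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)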

In order to verify the efficiency of our approach in solving practical problems and to test the goodness of LSO, mechanical design optimization problems were selected as test cases: the pressure vessel problem and the welded beam problem. These are hybrid (mixed discrete-continuous) optimization problems.

(1) Pressure Vessel. As shown in Figure 2, the pressure vessel is designed to minimize the total vessel weight. There are four design variables: the shell thickness $T_s$ ($= x_1$), the thickness of the head $T_h$ ($= x_2$), the inner radius $R$ ($= x_3$), and the length of the cylindrical section $L$ ($= x_4$). $T_s$ and $T_h$ are discrete values which are integer multiples of 0.0625 in, while $R$ and $L$ are continuous. The pressure vessel problem is stated as follows:
minimize $f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$
subject to
$g_1(x) = -x_1 + 0.0193 x_3 \le 0$,
$g_2(x) = -x_2 + 0.00954 x_3 \le 0$,
$g_3(x) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0$,
$g_4(x) = x_4 - 240 \le 0$.
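Under this formulation, the objective and constraints can be evaluated with a simple static penalty, as in the following sketch (the penalty weight is an assumption; the paper does not report the value it used):

import numpy as np

def pressure_vessel(x, penalty=1e6):
    # Penalized pressure vessel cost; x = [Ts, Th, R, L].
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)
    g = [-Ts + 0.0193 * R,                                            # g1
         -Th + 0.00954 * R,                                           # g2
         -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 1296000.0,  # g3
         L - 240.0]                                                   # g4
    violation = sum(max(0.0, gi) for gi in g)  # total constraint violation
    return cost + penalty * violation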

(2) Welded Beam. As shown in the schematic diagram of Figure 3, the welded beam is designed to minimize the total cost of the welded beam materials. There are four design variables: the weld thickness $h$ ($= x_1$), the weld joint length $l$ ($= x_2$), the width of the beam $t$ ($= x_3$), and the thickness of the beam $b$ ($= x_4$). $h$ and $l$ are discrete values which are integer multiples of 0.0625 in, while $t$ and $b$ are continuous. The objective of the welded beam problem is
minimize $f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$,
subject to constraints on the weld shear stress, the beam bending stress, the bar buckling load, the beam end deflection, and side constraints.
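A sketch of the welded beam cost under the objective above; the stress, deflection, and buckling constraints are deliberately omitted here, so this covers only the unconstrained part of the model:

def welded_beam_cost(x):
    # Material cost of the welded beam; x = [h, l, t, b].
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)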

4.2. Settings for Involved Algorithms

We compared the optimization performance of LSO with well-known algorithms: the standard PSO and the standard GA. In 2006, He et al. proposed the group search optimizer (GSO), inspired by the scrounging strategies of house sparrows and employing, in particular, an animal scanning mechanism [28]. That algorithm outperformed GA, PSO, EP, and ES on the 23 benchmark functions used in this paper. Therefore, LSO was also compared with GSO.

The parameter settings of every algorithm were manually tuned. Each experiment was repeated for 30 runs with a fixed maximum number of iterations per run. In every run, to make the comparison fair, the initial populations for all the considered algorithms were generated from the same population, which satisfied the normal distribution, and all algorithms used the same population size. The other specific settings for each of the algorithms are described below.
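One way to realize the clumped initial distribution shared by all algorithms is to draw normally distributed clusters around a few randomly placed centers; this mechanism, and the parameters n_clusters and spread, are illustrative assumptions, since the text specifies only that the shared initial population is clumped and satisfies a normal distribution:

import numpy as np

def clumped_init(pop_size, dim, lower, upper, n_clusters=5, spread=0.05):
    # Clumped population: normal clusters around random cluster centers.
    centers = np.random.uniform(lower, upper, size=(n_clusters, dim))
    idx = np.random.randint(n_clusters, size=pop_size)  # cluster assignment
    sigma = spread * (upper - lower)                    # per-dimension spread
    pop = centers[idx] + sigma * np.random.randn(pop_size, dim)
    return np.clip(pop, lower, upper)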

5. Results and Discussion

A large amount of experimental data from published research papers has shown that PSO and GA can find the optimum of some of these functions, but in this paper they become powerless. The cause is that the way of generating the initial population is changed from the random distribution method to the clumped distribution method. In a sense, the clumped distribution is a special form of random distribution. But, as stated before, uniform random distribution is rare in reality, while clumped distribution is the commonest. Hence, optimum solutions generated from initial populations with random distribution cannot be used to illustrate how algorithms perform on realistic, complex optimization problems.

5.1. Unimodal Functions $f_1$ to $f_7$

Table 1 presents the optimization results for the unimodal functions obtained by all algorithms. Obviously, LSO performs best and finds the global optimum or a very near optimum in all cases except function $f_5$. Most of these functions show a consistent performance pattern across all algorithms: LSO is the best, GSO is almost as good, and PSO and GA fail. Function $f_6$ is the step function, which consists of plateaus, has one minimum, and is discontinuous. It is obviously easy for LSO and GSO to find the optimum solution, but difficult for PSO. Function $f_7$ is a noisy quartic function, where the noise term is a uniformly distributed random variable in $[0,1)$. On this function, LSO can find the exact optimum, whereas the other algorithms cannot.

Generally speaking, the unimodal benchmark functions $f_1$ to $f_7$ are relatively easy to optimize. They are mainly used for testing the convergence rate of an algorithm, and solution accuracy is not the major issue. On this kind of function, LSO has the best optimization accuracy and the fastest convergence rate; it can converge exponentially fast toward the fitness optimum. This conclusion is illustrated in Figure 4, which shows the progress of the mean best solutions found by these algorithms over 30 runs for all unimodal functions except function $f_5$. From Figures 4(a) to 4(f), it can be seen that LSO has the best convergence speed, followed by GSO, GA, and PSO. From the beginning of the iterations, the convergence rate of LSO is faster and its convergence curve declines rapidly. By the 500th iteration, LSO has found the optimum solution. Moreover, as the number of iterations increases, LSO continues to approach the optimum solution at a fast rate. The convergence curves of the other algorithms are either much slower or look like horizontal lines and seem to stagnate.

5.2. Multimodal Functions with Many Local Minima $f_8$ to $f_{13}$

Table 2 presents the optimization results on the multimodal functions with many local minima, $f_8$ to $f_{13}$. These functions are mainly used to test the capability of seeking the global optimum and escaping from local optima; here, the quality of the final results is the more crucial element. It can be found from Table 2 that, for 4 out of the 6 functions, LSO generated better results than the other three algorithms. On one of the two exceptions, LSO performs slightly worse than GA, although the standard deviation of LSO is highly superior to that of GA; on the other, GSO performs moderately better than LSO. Figure 5 shows the convergence curves for functions $f_8$ to $f_{13}$. LSO converges very fast to good values near the optimum. In summary, the search performance of the four algorithms tested here can be ordered as LSO > GSO > GA > PSO.

5.3. Multimodal Functions with Few Local Minima $f_{14}$ to $f_{23}$

Functions $f_{14}$ to $f_{23}$ are multimodal functions with few local minima and possess rather unique features, which can verify the adaptation of algorithms to different optimization environments. Table 3 presents the optimization results for functions $f_{14}$ to $f_{23}$. It can be concluded from Table 3 that the order of the search performance of these four algorithms is LSO > GSO > PSO > GA.

For these ten functions, in terms of the test indicators, LSO ranked first on five of them. For example, the problem shown in Figure 6(a) has five extremes; the bottom point of the deepest hole is the global optimal position, and the other holes are deceptive. Figure 6(b) shows the convergence results of the four algorithms. All algorithms progress quickly in the early iterations, but GSO, PSO, and GA stagnate before finding the global optimum, while LSO does not stagnate until it finds it. At the beginning of the search, there are a number of promising “fox holes,” so the convergence rate of these algorithms is fast. But after a short period, owing to a lack of ability to jump out of local extrema, the solutions obtained by GSO, PSO, and GA fall deeply into the “fox holes,” and their evolutionary curves tend to stop. The optimization tactics of LSO allow it to escape from the deceptive regions and migrate towards the global optimum. Two further functions have properties similar to this one; Figure 7 shows the same convergence behavior on them.

Two of these functions are easy problems on which all algorithms find the exact optimum solutions. On two others, LSO, GSO, and PSO yield the exact optimum, while GA yields only an approximate optimum; on yet another, all algorithms come very close to the global optimum. Figures 8(a) and 8(c) show the convergence results on two of these functions. Moreover, Figures 8(b) and 8(d) show that LSO has the fastest convergence speed.

5.4. Mechanical Design Optimization Problem

Mechanical design optimization plays an important role in engineering and manufacturing enterprises. In this field, one of the most difficult aspects is the handling of constraints and mixed optimization variables. On these two test problems, the best feasible values found by the proposed algorithm are 6059.72 and 1.7107, respectively. The best feasible solutions found by our approach are better than the solutions found by other techniques reported in the literature [29]. In addition, the standard LSO employed a penalty function to preserve the feasibility of the encountered solutions. This method is relatively simple compared to other techniques introduced to solve constrained problems, such as the multiobjective evolutionary method, the collaborative evolutionary particle swarm optimization algorithm, the dynamic penalty function method, the annealing penalty function method, the information feedback adaptive penalty function method, the multilayer social culture algorithm, and the particle swarm algorithm combining global and local topologies.
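The penalty idea can be expressed generically; the following sketch wraps any objective with inequality constraints of the form g(x) <= 0 using a static weight (the weight value is an assumption):

def penalized(f, constraints, weight=1e6):
    # Static penalty wrapper for inequality constraints g(x) <= 0.
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + weight * violation
    return wrapped

For example, the pressure vessel problem of Section 4.1 could then be minimized by LSO through penalized(cost, [g1, g2, g3, g4]) without any change to the algorithm itself.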

6. Conclusions

This paper proposed a novel optimization algorithm, LSO, which is based on biological lifecycle theory. Based on the features of the lifecycle, LSO designs six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. The population is the basic unit of biological existence, and a clumped spatial distribution of the population is the commonest pattern; this paper borrows the clumped distribution pattern to generate the initial population. A set of 23 unconstrained benchmark functions and mechanical design optimization problems have been used to test LSO in comparison with GSO, PSO, and GA.

It is worth mentioning that LSO cannot find the optimum of function $f_5$. Function $f_5$ is a nonconvex function whose global minimum lies inside a long, narrow, parabolic, flat valley. Even though the valley is easy to find, convergence to the global minimum is difficult. Our future work will therefore study how to give LSO the ability to move quickly along the narrow valley in the local area towards the objective function minimum; for instance, a gradient-based method could be incorporated in the late stage of optimization.

As part of our future work, LSO could also be studied and tested on real-world problems, such as the location problem in manufacturing systems, the network routing problem in computer engineering, the parameter identification problem in industrial engineering, and problems in electrical engineering, aerospace engineering, and bioengineering.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper and no financial conflict of interests between the authors and any commercial entity.

Acknowledgments

This project is supported by the National Natural Science Foundation of China (Grant nos. 61174164, 51205389, 61105067, 71001072, 71271140, and 71240015), the National Natural Science Foundation of Liaoning province of China (Grant no. 201102200), the General Research Project of Liaoning province of China (Grant no. L2012392), and the Natural Science Foundation of Guangdong Province (Grant nos. S2012010008668 and 9451806001002294).