Abstract

Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and improper penalties in EAs based on penalty functions can cause them to miss a global optimum that lies near or on the constraint boundary. Determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) in which multiagents guide exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolution with multiagents not only effectively avoids the problem of determining a penalty coefficient but also converges quickly to a global optimum near or on the constraint boundary. By examining the rapidity and veracity of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.

1. Introduction

A great many engineering design problems can be formulated as constrained nonconvex optimization problems, which are difficult to solve with classic mathematical theory. Since swarm intelligence (SI) [1, 2] was introduced by Wang and Beni [3] in 1989, many evolutionary algorithms (EAs) describing the collective behavior of decentralized, self-organized systems, natural or artificial, have provided robust evidence for dealing with incremental nonconvex or other complex optimization [4–6]. Many successful applications of EAs [7–9] have been reported for solving engineering problems such as industrial design [10–12], transportation [13], commerce [14], and bioinformation [15]. There are also many improvements [16, 17] on the original algorithms. To extend this application to constrained optimization, penalty functions are usually used to handle multiple constraints [18, 19], which transforms the problems into unconstrained ones but meanwhile makes the original objective function more complex. Previous research [20–22] has suggested that EAs can be widely used to tackle such problems. Many successful applications of EAs have been reported for solving engineering problems such as industrial design [23, 24] and military management [25]. To further expand the application of EAs to more difficult but important problems, Vural et al. carried out further study in analog filter design with evolutionary algorithms [26], Li and Yao presented a new cooperatively coevolving particle swarm for large scale optimization [27], Blackwell provided further study of collapse in bare bones particle swarm optimization [28], Pehlivanoglu enhanced particle swarm optimization with a periodic mutation strategy and neural networks [29], Chen et al. proposed particle swarm optimization with an aging leader and challengers [30], and Naznin et al. suggested a progressive alignment method using a genetic algorithm for multiple sequence alignment [31]. 
Constrained optimization is an important class of problems solved by EAs, such as the methods proposed by Wang et al. [32, 33], Cai and Wang [34], Krohling and Dos Santos Coelho [35], and Tessema and Yen [36]. Recently, Daneshyari and Yen [37] have noted that genetic-based algorithms and swarm-based paradigms are two popular population-based heuristics introduced as EAs for solving constrained optimization problems [38–40]. These algorithms achieve better global search abilities by sharing the principles of natural evolution or of swarm intelligence in nature.

The genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution by parallel computing. It became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). The strong exploration of GA makes it fit for intricate optimization problems [41, 42], but GA's inborn disadvantages, such as the slow convergence caused by the mutation operator [22], have limited the adoption of this algorithm in practical applications. Furthermore, particle swarm optimization (PSO), developed by Kennedy and Eberhart [43, 44], has received more and more attention regarding its potential as a faster global optimization technique [22]. However, it can be caught in the trap of local optima through premature convergence. Thus, a special trade-off between exploration and exploitation is required to achieve a proper balance between optimization reliability and convergence speed. A feasible method is to combine a variety of EAs into a hybrid algorithm with strong exploration and exploitation, because mutual reinforcement makes a hybrid algorithm fit for difficult optimization problems where acceptable solutions can be achieved [45–47]. Recently, coevolutionary algorithms (CoEAs) have been extensively studied for solving complex constrained optimization problems [20]. They can be considered a new form of hybrid algorithm with high efficiency in exchanging information between agents. Although these algorithms can perform better than standard EAs, inflexible exploration-exploitation and improper penalties in general EAs with penalty functions can still cause them to miss a global optimum near or on the constraint boundary.

Usually the fitness function, especially around the constraint boundary where a global optimum lies, becomes much more complex because of the mixed constraints and penalty functions. Over-penalizing forbidden agents that are near the global optimum discards important samples with strong guiding value. Under-penalizing forbidden agents that are far from the global optimum weakens the pull toward the global optimum. Inflexible exploration-exploitation will then probably miss the global optimum and lead to ill-convergence. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) in which multiagents guide adaptive exploration-exploitation toward the global optimum. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. The penalty fitness guides agents in the forbidden region toward the nearby constraint boundary with optimal objective fitness, while the objective fitness guides agents in the feasible region toward the global optimum. This bidirectional evolution with multiagents not only effectively avoids the problem of determining a penalty coefficient but also converges quickly to a global optimum near or on the constraint boundary. Unlike other penalty methods, the penalty fitness suggested in this paper is not added to the objective function as a penalty term but enters Bi-DDEA as an independent evaluation term for selecting optimal forbidden agents, which drives convergence toward the global optimum from the other direction, within the forbidden region.

This paper is organized as follows. Section 2.1 presents the key elements of bidirectional information processing. Section 2.2 explains how to carry out adaptive exploration-exploitation with multiagents. Section 2.3 describes the outline of Bi-DDEA. Section 3 provides the numerical experimentation with benchmark problems and comparisons with other swarm optimization algorithms. A detailed analysis and conclusion of the algorithm are presented in Section 4.

2. Bidirectional Dynamic Diversity Evolutionary Framework

2.1. Bidirectional Information Processing

Although the penalty function method is one of the most common approaches to constrained optimization problems, the design of the penalty term should fit the algorithm with which practical optimization problems will be handled. In order to avoid the complication caused by adding a penalty term to the objective function, an objective fitness and a penalty fitness are suggested in this paper to handle the agents distributed in the feasible region and in the forbidden region, respectively. The objective fitness of every agent is computed according to the objective function. When an agent emerges in the feasible region, its penalty fitness is set to zero. When an agent emerges in the forbidden region, its penalty fitness is computed from the constraint conditions by summing all the excess values of the violated constraints.

The general constrained problem formulation, also called the primal problem, can be stated as minimizing an objective f(x), where x represents a vector of real variables subject to a set of inequality constraints g_i(x) ≤ 0 and a set of equality constraints h_j(x) = 0. The penalty fitness proposed here is defined as the total constraint violation: the amount by which each inequality constraint is exceeded plus the absolute violation of each equality constraint.

The penalty fitness is used in Bi-DDEA to judge whether an agent is in the feasible region. Once an agent is updated by Bi-DDEA, the penalty fitness is called to check the decision variable represented by this agent. If the penalty fitness is zero, a feasible decision variable has been obtained. Otherwise, the penalty fitness replaces the objective fitness and is regarded as the fitness of the forbidden agent, as expressed by the fitness function in (4). As the fitness function is divided into two parts to deal with the feasible agents and the forbidden agents, respectively, bidirectional evolution can be carried out on these two kinds of agents. The fitness growth of an agent is then defined in (6) with respect to the iteration number and the minimum fitness of all the feasible agents.
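The two-part evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulas (2), (4), and (6): the constraint forms g_i(x) ≤ 0 and h_j(x) = 0 and the violation measure are assumptions inferred from the surrounding text.

```python
# Sketch of bidirectional fitness evaluation (hypothetical reconstruction).

def penalty_fitness(x, ineq_constraints, eq_constraints):
    """Sum of all constraint violations; zero for a feasible agent."""
    p = sum(max(0.0, g(x)) for g in ineq_constraints)   # excess over g_i(x) <= 0
    p += sum(abs(h(x)) for h in eq_constraints)          # violation of h_j(x) = 0
    return p

def evaluate_agent(x, objective, ineq_constraints, eq_constraints):
    """Return (fitness, is_feasible). Feasible agents keep the objective
    fitness; forbidden agents are ranked by their penalty fitness alone."""
    p = penalty_fitness(x, ineq_constraints, eq_constraints)
    if p == 0.0:
        return objective(x), True    # evolve toward the global optimum
    return p, False                  # evolve toward the constraint boundary
```

Because the penalty fitness never mixes with the objective value, no penalty coefficient is needed; the two agent populations are simply compared on separate scales.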

Three potential superiority factors are introduced in this paper to analyze the sampling information and estimate the situation of both feasible agents and forbidden agents; they guide bidirectional evolution by distributing newborn agents with adaptive density. The potential superiority factors are defined in terms of the minimum and the maximum objective fitness in the feasible region, the minimum and the maximum penalty fitness in the forbidden region, the maximum fitness growth of feasible agents in previous generations, the minimum fitness growth of feasible agents in the current generation, the maximum fitness growth of forbidden agents in the current generation, the minimum fitness growth of forbidden agents in previous generations, and a reliability factor that increases gradually from zero to one over the course of optimization.

2.2. Dynamic Diversity Evolution

Three types of agents, partition agents (PAs), basic agents (BAs), and creative agents (CAs), are combined in Bi-DDEA to carry out adaptive exploration-exploitation under the guidance of the potential superiority factors. Partition agents are distributed uniformly in the feasible region and increase gradually, playing the role of brute-force search and ensuring exploration throughout the course of optimization. Basic agents are distributed in different partitions according to the properties of the partition agents, playing the role of a transition medium between partition agents and creative agents, and also between exploration and exploitation. Creative agents are distributed around basic agents according to the properties of the basic agents, playing the role of exploitation in the global area and also of exploration in local partitions.

The evolutionary process of Bi-DDEA is mainly carried out by the rebirth of new agents around the senior ones. One PA generates some BAs in a local region, and one BA generates some CAs in a local region around itself. Then some CAs with higher potential superiority are selected as the next BAs. BAs alternate with CAs, and this alternation is also affected by the PAs. The focus in Bi-DDEA is no longer on updating the positions of previous agents but on the density distribution of newborn agents. That density is controlled by regulating the range and the scale of newborn agents according to the feedback of sampling information, namely the potential superiority factors.

The range of newborn agents is determined by two terms in Bi-DDEA. The first is a range narrowed by a fixed ratio, which means that the child agents inherit from their parent. The second is a range regulated by the sampling information, which means that the child agents learn from the sampling information and adjust their range accordingly. The range of agents with lower fitness is then set to a larger value, and vice versa. To fuse these two sources of information, a study factor is introduced to integrate the two terms as a weighted combination. The resulting formulas involve the ranges of the PAs, BAs, and CAs in each iteration together with the corresponding correction factors.

When newborn CAs are distributed, the sampling information also affects the range of the PAs, and the change in the PAs' range in turn affects the range of the CAs. This process is called self-correction in range. The equation set (13)–(16) is also called the range-dividing principle in Bi-DDEA. A larger range, caused by a smaller potential superiority factor, makes weak agents explore outward to discover more dominant positions. On the contrary, a smaller range, caused by a larger potential superiority factor, makes agents converge to exploit the emergent region quickly.
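The range-dividing principle above can be illustrated with a small sketch. The inherit-plus-learn blend, the narrowing ratio, and the bounds `r_min`/`r_max` are all hypothetical stand-ins, since the paper's formulas (13)–(16) are not reproduced here; only the qualitative rule, lower fitness maps to a larger exploratory range, is taken from the text.

```python
# Hypothetical sketch of the range update for a child agent.

def child_range(parent_range, fitness, f_min, f_max,
                narrow_ratio=0.5, eta=0.5, r_min=1e-3, r_max=1.0):
    """Blend an inherited, narrowed range with a range regulated by the
    agent's relative fitness: weaker agents receive a larger range so that
    they explore outward, stronger agents a smaller range to exploit."""
    inherited = narrow_ratio * parent_range
    # Normalized quality in [0, 1]; small epsilon avoids division by zero.
    quality = (fitness - f_min) / (f_max - f_min + 1e-12)
    regulated = r_max - quality * (r_max - r_min)   # low fitness -> large range
    return (1.0 - eta) * inherited + eta * regulated
```

Here `eta` plays the role of the study factor: at 0 children purely inherit, at 1 they purely follow the sampling feedback.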

The scale of newborn agents is also determined by two terms. The first is the same scale as that of the parent, which means the child agents inherit from their parent. The second is a scale regulated by the fitness growth factors, which means the child agents learn from the sampling information and adjust their scale accordingly. The scale of agents whose fitness growth is larger is then set to a larger value, and vice versa. To fuse the information, the same study factor is introduced to integrate the two terms as a weighted combination, with formulas involving the scales of the PAs, BAs, and CAs in each iteration together with the corresponding correction factors.

When newborn CAs are distributed, the sampling information also affects the scale of the PAs, and the change in the PAs' scale in turn affects the scale of the CAs. This process is called self-correction in scale. The reason to use a fitness-growth term as the self-correcting factor is that its focus transitions over time, and a large number of agents should be used in the later phase of optimization to exploit the area that contains the optimal agent. While the algorithm runs, the focus of the larger-scale agent groups shifts from complex areas to dominant areas. Thus, Bi-DDEA can carry out broad exploration at the earlier stage of optimization and fast, targeted exploitation at the later phase. In addition, a swarm scale floor (SSF) is introduced to quickly reduce the number of agents in areas where there is little growth in fitness. This not only saves sampling resources but also maintains global exploration with fewer agents.

A swarm scale ceiling (SSC) is also introduced to prevent agents from being generated excessively. Since a forbidden agent cannot be the global optimum, the scale floor of forbidden agents is not governed by the SSF but is instead limited to one or more. The scale of agents is corrected accordingly.

The equation set (17)–(21) is also called the scale-setting principle in Bi-DDEA. A larger scale, caused by larger fitness growth, not only reduces the loss of information in volatile regions but also strengthens both exploration and exploitation toward potentially advantageous areas. On the contrary, a smaller scale, caused by smaller fitness growth, saves much of the time that might otherwise be wasted in poor areas.
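The SSF/SSC correction described above reduces to a pair of clamps, which can be sketched as follows. The numeric defaults for `ssf` and `ssc` are illustrative assumptions, not values from the paper.

```python
# Sketch of the scale correction with a swarm scale floor (SSF) and
# ceiling (SSC). Values of ssf and ssc are illustrative only.

def correct_scale(scale, feasible, ssf=5, ssc=50):
    """Clamp a swarm's size: never exceed the ceiling; feasible swarms keep
    at least SSF agents, while each forbidden swarm keeps at least one agent
    alive to preserve guidance toward the constraint boundary."""
    scale = min(int(round(scale)), ssc)   # swarm scale ceiling
    if feasible:
        return max(scale, ssf)            # swarm scale floor for feasible swarms
    return max(scale, 1)                  # forbidden swarms: floor of one
```

Keeping at least one forbidden agent per swarm is what preserves the second evolution direction even when a region's fitness growth collapses.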

2.3. The Proposed Bi-DDEA

Bi-DDEA mainly consists of the bidirectional information processing suggested in Section 2.1 and the dynamic diversity evolution suggested in Section 2.2. The algorithm flow is described as follows:
(1) divide the search space into a number of partitions with partition agents (PAs) and initialize the properties of each PA;
(2) distribute basic agents (BAs) in each partition according to the properties of its PA;
(3) divide the BAs into two types according to the constraint conditions and compute, respectively, their objective fitness, penalty fitness, and fitness growth according to formulas (4), (2), and (6);
(4) estimate the potential advantage of the BAs in both the feasible region and the forbidden region according to formulas (8), (10), and (11);
(5) assign the attributes of the BAs according to formulas (13) and (17);
(6) distribute creative agents (CAs) around each BA according to the range and scale of that BA;
(7) divide the CAs into two types according to the constraint conditions and compute, respectively, their objective fitness, penalty fitness, and fitness growth according to formulas (4), (2), and (6);
(8) estimate the potential advantage of the CAs in both the feasible region and the forbidden region according to formulas (8), (10), and (11);
(9) assign the attributes of the CAs and PAs according to formulas (14), (15), (18), and (19);
(10) correct the scale of each agent according to formulas (20) and (21);
(11) select the better CAs as the next BAs according to the scale of the PA in each partition;
(12) judge whether the stopping conditions are met; if so, continue to the next step, otherwise jump to step (3);
(13) select the best BA as the current global optimal agent and translate it into a PA;
(14) judge whether the stopping conditions are met; if so, continue to the next step, otherwise jump to step (1);
(15) output the results.
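The control flow of steps (1)–(15) can be sketched structurally as below. Only the loop structure and the bidirectional selection are intended to mirror the flow above; the helpers `random_point` and `perturb`, the parameter names, and the uniform perturbation are all simplified placeholders for the paper's formulas.

```python
# Structural sketch of the Bi-DDEA outer/inner loops (placeholders for the
# paper's attribute-assignment and potential-superiority formulas).
import random

def bi_ddea(objective, penalty, bounds, n_partitions=4,
            outer_iters=3, inner_iters=5, bas_per_pa=3, cas_per_ba=3):
    best = None
    for _ in range(outer_iters):                                  # steps (1)-(14)
        pas = [random_point(bounds) for _ in range(n_partitions)]  # step (1)
        bas = [perturb(pa, bounds)
               for pa in pas for _ in range(bas_per_pa)]           # step (2)
        for _ in range(inner_iters):                               # steps (3)-(12)
            cas = [perturb(ba, bounds)
                   for ba in bas for _ in range(cas_per_ba)]       # step (6)
            feasible = [c for c in cas if penalty(c) == 0.0]
            forbidden = [c for c in cas if penalty(c) > 0.0]
            # Bidirectional selection: feasible agents ranked by objective,
            # forbidden agents ranked by penalty (toward the boundary).
            feasible.sort(key=objective)
            forbidden.sort(key=penalty)
            bas = (feasible + forbidden)[:len(bas)]                # step (11)
        cand = min((b for b in bas if penalty(b) == 0.0),
                   key=objective, default=None)                    # step (13)
        if cand is not None and (best is None
                                 or objective(cand) < objective(best)):
            best = cand
    return best

def random_point(bounds):
    return [random.uniform(lo, hi) for lo, hi in bounds]

def perturb(x, bounds, step=0.1):
    return [min(max(xi + random.uniform(-step, step), lo), hi)
            for xi, (lo, hi) in zip(x, bounds)]
```

In the full algorithm, the range and scale of each rebirth would be set by the range-dividing and scale-setting principles rather than by the fixed `step` and counts used here.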

Obviously, all the agents, divided into two types in Bi-DDEA, evolve along two directions with the dynamic diversity evolution method under the guidance of the bidirectional information analysis proposed in this paper.

3. Benchmark Problems

The performance of Bi-DDEA is evaluated by comparing its rapidity, accuracy, and universality with those of the PSO algorithm and GA. To make the comparison meaningful, some benchmark functions that are commonly used for testing randomized population-based algorithms are selected as objective functions. This gives a fair comparison, as far as possible free from biases that favor one style of algorithm over another. These functions include Rastrigin [48–53], Rosenbrock [48–53], Griewank [49–52], Needle-in-a-Haystack [54–56], Shubert [48, 55, 57, 58], and Schaffer [49, 50, 54, 55, 57], as shown in Figure 1. Since one of the most difficult but important issues in designing optimization algorithms is managing the balance between exploration and exploitation, these test functions, whose peak numbers and peak distributions differ, truly challenge both the exploration and the exploitation behavior of the algorithms. As Bi-DDEA is used to find the maximum value, we select the opposite of the objective function value as fitness in the following evaluation process. In addition, the benchmark problems are modified into constrained ones by imposing a constraint condition that depends on the dimension of the decision variable.
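As an illustration, one such benchmark setup might look like the sketch below: the standard 2-D Rastrigin function plus an inequality constraint restricting the feasible region. The specific constraint (a disc of radius 4 around the origin) is an illustrative stand-in, since the paper's constraint formula is not reproduced here.

```python
# Hypothetical constrained-benchmark setup: Rastrigin with a disc constraint.
import math

def rastrigin(x):
    """Standard Rastrigin function; global minimum 0 at the origin."""
    n = len(x)
    return 10.0 * n + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                          for xi in x)

def constraint(x, radius=4.0):
    """Inequality constraint, <= 0 when feasible: keeps agents inside a
    disc of the given radius around the origin (illustrative only)."""
    return sum(xi * xi for xi in x) - radius * radius
```

With the sign convention of Section 2.1, an agent's penalty fitness would be `max(0.0, constraint(x))`, which is zero inside the disc and grows with the violation outside it.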

For GA, the initial parameters are set to their default values as described below:
(1) population size: 100;
(2) probability of mutation: 0.05.

For PSO, the initial parameters are set to their default values as described below:
(1) population size: 40;
(2) inertia weight: starts at 1, then decreases from 1 to 0.1 during the iteration progress.
The dynamic equations are the standard PSO velocity and position updates.

For Bi-DDEA, the initial parameters are set to their default values as described below:
(1) partition dividing;
(2) learning factor;
(3) SSF;
(4) SSC;
(5) outer loop number;
(6) inner loop number.
The dynamic equations are given by formulas (13)–(21) in Section 2.2. Although the dynamic equations of Bi-DDEA are more complex than those of PSO, the method is time-saving because it determines the next distribution of many sampling points at once.

3.1. Rapidity

This section compares the rapidity of Bi-DDEA with the results obtained by GA (genetic algorithm) and PSO (particle swarm optimization) when applied to the six benchmark functions mentioned above. Sampling time is used to represent the optimization time logically. It refers to the cumulative time spent by every sample in calculating its fitness from its coordinates. It eliminates the program editor, environment, and other external factors affecting the evaluation of optimization speed, which allows us to compare the rates of these algorithms more explicitly. An algorithm with a different program design or in a different computing environment will perform differently. For example, a good programmer can implement an algorithm with streamlined code to improve operational efficiency, but it is difficult to see its advantages when it runs on a CPU with a low clock frequency. In an experimental comparison, we would probably get wrong results due to these external factors. So, we use sampling time as the reference to analyze the algorithms' most fundamental speed characteristics. Since the time used for each sample is almost equal, we use the number of samples in place of sampling time in this paper.

Figures 2, 3, 4, 5, 6, and 7 show some of the algorithms' qualified run-length distributions (RLDs, for short) based on sampling time when solving the benchmark functions. The solid line, dashed line, dash-dot line, and dotted line represent, respectively, Bi-DDEA, LP-DDEA, PSO, and GA. On the whole, the optimization speeds of PSO and Bi-DDEA are similar to each other and higher than that of GA. LP-DDEA is sometimes better than PSO and Bi-DDEA on the Rosenbrock and Shubert functions, but slower than Bi-DDEA on the other problems. From this point of view, the algorithm designed in this paper is quite advantageous. Although PSO's convergence is sometimes faster than that of Bi-DDEA in the early stage, the smallest first hitting times across the different benchmark functions are mainly obtained by Bi-DDEA. This is because Bi-DDEA can dynamically provide diverse and sufficient exploration to drive more accurate evolution in two directions. When a fitter area is found by exploration, fast exploitation is carried out in this area immediately. But the exploitation does not take up all the system resources: many agents continue to explore in both the feasible region and the forbidden region, and more and more exploitation agents are quickly changed into exploration agents if exploitation cannot find much fitter results. Thus, more and more samples emerge at the constraint boundary and in the better local optimal areas that probably include the global optimum. When the global optimum lies on the constraint boundary, convergence is much quicker. No algorithm comparison is without exceptions: PSO can show amazing speed when some particles drop into a better space in the experiments. Generally speaking, exploration and exploitation in Bi-DDEA are almost synchronous, and exploration in Bi-DDEA is more intensive than that of PSO, so Bi-DDEA reaches the global optimal result earlier in these experiments.

3.2. Veracity

High accuracy and global superiority are the key indicators of a good algorithm in quality evaluation. Tables 1, 2, 3, 4, 5, and 6 show the optimal fitness obtained by the four algorithms across the six benchmark functions. We can clearly see from the results that Bi-DDEA and LP-DDEA find better fitness than GA, while PSO is sometimes trapped in a local optimum. On the other hand, PSO can find more precise results than DDEA when it is not trapped in a local optimal area. In fact, both PSO and DDEA have high accuracy because of the rapid convergence of the sample swarm, an exploitative behavior, in the optimal area. The reason DDEA can mostly find the minimum fitness is that DDEA's global search capability is stronger than that of PSO, and PSO sometimes did not avoid getting into local minima on the Schaffer, Needle-in-a-Haystack, and Griewank functions in these experiments.

3.3. Samples Distribution

To avoid falling into a local minimum and to find a better optimum or the constraint boundary, a good algorithm should have the ability to explore the unknown feasible region. As a case analysis, we now consider the distribution of all the samples through the whole optimization process, as shown in Figure 8. The sample distributions shown in the four columns of the figure are produced, from left to right, by GA, PSO, LP-DDEA, and Bi-DDEA. The sample-distribution figures in each row are produced by these algorithms on the same benchmark function, and the benchmark functions differ from row to row; from top to bottom they are Rastrigin, Rosenbrock, Griewank, Needle-in-a-Haystack, Shubert, and Schaffer. From these figures, it can be intuitively seen that GA has the strongest global exploring ability, followed by LP-DDEA and Bi-DDEA, and finally PSO. This is why GA's performance is better than PSO's when searching for the extreme point of the Needle-in-a-Haystack function. But it also takes GA a long time to explore everywhere without any convergent exploitation in the feasible region, which not only reduces optimization speed but also affects optimization accuracy to some extent.

In addition, Figure 8 shows that Bi-DDEA and LP-DDEA both have the ability to find the constraint boundary, where the global optimum lies in all probability. But Bi-DDEA finds a clearer boundary with a more effective sample distribution, because the bidirectional evolution suggested in Bi-DDEA carries out a special convergence from the forbidden region toward the better constraint boundary. As the global optimum lies on the constraint boundary in most constrained optimization, Bi-DDEA is faster than LP-DDEA when dealing with constrained problems in engineering applications, as shown in Section 3.1.

4. Analysis and Conclusion

Experimental results have notably shown that Bi-DDEA, with its bidirectional evolution, is more successful in solving nonconvex constrained optimization problems. This performance stems from the bidirectional information processing and the dynamic diversity evolution carried out by multiagents, consisting of PAs, BAs, and CAs, in both the feasible region and the forbidden region. Adaptive exploration-exploitation in the feasible region lets Bi-DDEA find most of the local optima, including the global optimum. This indicates that the suggested algorithm has the ability to discover the character of the function distribution, which provides robust evidence for the global searching ability of Bi-DDEA. The other evolution direction, from the forbidden region, is detected specially by the agents that are inevitably distributed in the infeasible region. The mechanism of Bi-DDEA does not penalize these infeasible agents with a large penalty that would eliminate their guidance; instead it makes full use of their guidance toward the constraint boundary. When a feasible agent is generated by a forbidden agent, it gets much higher fitness growth according to its present fitness. Thus, more and more samples are distributed on the boundary with higher fitness, as shown in Figure 8.

Bi-DDEA is not sensitive to the initial population in most cases. From the mechanism analysis of Bi-DDEA, we can see that the sample size in each iteration is regulated according to the feedback of sampling information. Even if the initial population setting differs greatly from what is needed, Bi-DDEA will automatically adjust the population scale to the needed degree within a few iterations. It is worth noting that some special kinds of initial partitions are able to improve the efficiency of the optimization algorithm; these kinds of initial partitions are called sensitive initial conditions. How to determine the sensitive initial conditions will be one of the future research directions for improving the performance of Bi-DDEA in practical applications.

Letting Bi-DDEA adapt easily to diverse and comprehensive constrained environments is another important goal of this paper. Although the six benchmark functions modified with the constraint condition cannot fully demonstrate Bi-DDEA's generalization capability, the sample distributions have provided robust evidence for the predominance of bidirectional dynamic diversity evolution. Many more experiments and applications are needed to verify the validity of this optimization method. In fact, finding exactly the problem types that an optimization method fits will be more important and useful than striving to improve the method's generalization capability. In future studies, we will survey and identify the best parameters of Bi-DDEA for matching different types of optimization problems, in order to expand the scope in which Bi-DDEA gives better results. There are also other special application fields to which Bi-DDEA could be applied, such as discrete optimization problems [59], dynamic optimization problems [60], and multiobjective optimization problems [37, 61, 62]. Many more potential applications of Bi-DDEA thus remain to be explored and exploited in future studies.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61074020 and the Fundamental Research Funds for the Central Universities.