Abstract

Over the last few decades, evolutionary algorithms (EAs) have been widely adopted to solve complex optimization problems. However, EAs struggle with constrained optimization problems (COPs) because they do not act directly to reduce constraint violations. In this paper, the robust global search ability of the artificial bee colony (ABC) algorithm and the low computational cost and stability of the constraint consensus (CC) strategy are integrated into a novel hybrid heuristic algorithm, named ABCCC. The CC strategy is effective at rapidly reducing constraint violations during the evolutionary search process. The performance of the proposed ABCCC is verified on a set of constrained benchmark problems in comparison with two state-of-the-art CC-based EAs: particle swarm optimization based on CC (PSOCC) and differential evolution based on CC (DECC). Experimental results demonstrate the promising performance of the proposed algorithm in terms of both optimization quality and convergence speed.

1. Introduction

Optimization problems occur in many disciplines, for example, in engineering [1], physical sciences [2], social sciences [3], and commerce [4]. In real-world applications, optimization is carried out under certain physical limitations, with limited resources. If these limitations can be quantified as equality or inequality constraints on the variables, then a constrained optimization problem can be formulated whose solution leads to an optimal solution that satisfies the limitations imposed [5].

A constrained optimization problem, in which the objective function is optimized under given constraints, is very important and appears frequently in practical applications. The general constrained optimization problem with inequality, equality, upper bound, and lower bound constraints is defined as follows:

minimize f(x), x = (x_1, x_2, …, x_n)
subject to g_j(x) ≤ 0, j = 1, …, q
h_j(x) = 0, j = q + 1, …, m
l_i ≤ x_i ≤ u_i, i = 1, …, n

where x is an n-dimensional variable vector, f(x) is the objective function, g_j(x) is an inequality constraint, h_j(x) is an equality constraint, and m is the total number of inequality and equality constraints. The values l_i and u_i are the lower bound and the upper bound of x_i, respectively. The upper and lower bounds define the search space, and the inequality and equality constraints define the feasible region. A point inside the feasible region or on its boundary is feasible; otherwise, it is infeasible.
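The general formulation above maps directly to code. The following sketch (Python, using a made-up toy problem; the names f, g, h, and bounds are illustrative, not from the paper) encodes one inequality constraint and box bounds together with a feasibility check:

```python
# Hypothetical toy instance of the general formulation: minimize
# f(x) = x0^2 + x1^2 subject to g(x) = 1 - x0 - x1 <= 0 and 0 <= xi <= 5.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = [lambda x: 1.0 - x[0] - x[1]]            # inequality constraints g_j(x) <= 0
h = []                                       # equality constraints h_j(x) = 0 (none here)
bounds = [(0.0, 5.0), (0.0, 5.0)]            # lower/upper bounds per variable

def is_feasible(x):
    """A point is feasible if it satisfies every constraint and every bound."""
    return (all(gj(x) <= 0 for gj in g)
            and all(hj(x) == 0 for hj in h)
            and all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds)))

print(is_feasible([0.5, 0.5]))   # True: exactly on the constraint boundary
print(is_feasible([0.2, 0.2]))   # False: g(x) = 0.6 > 0
```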

Due to the complexity of constrained optimization problems, they are difficult for traditional evolutionary algorithms to solve. Moreover, evolutionary algorithms are search techniques designed mostly for unconstrained optimization, so the solution they return may not be accurate when they are applied to constrained problems. For this reason, it is necessary to combine them with a suitable constraint handling technique. Researchers have proposed various kinds of constraint handling techniques in recent years; these are discussed in the next section.

For constrained problems, it is widely accepted that an optimizer able to consider both fitness and feasibility improvement during the search can find the feasible optimum more effectively and efficiently than existing search operators. For this purpose, a novel artificial bee colony algorithm based on the constraint consensus strategy (ABCCC) is elaborated. The artificial bee colony (ABC) algorithm, proposed by Karaboga, is a recent heuristic algorithm for numerical optimization inspired by the foraging behavior of honey bees [6]. Compared with differential evolution (DE) and particle swarm optimization (PSO), the ABC algorithm has two distinct advantages: (1) it performs well in both local and global optimization; (2) it is flexible, robust, and simple to use, and can be applied efficiently to multimodal and multi-variable problems. To drive individuals towards the feasible region, the constraint consensus (CC) strategy is embedded into the ABC algorithm. The CC strategy has the following advantages: (1) it is simple to implement and has proved effective in past tests [7]; (2) it has a small computational cost and good numerical stability. To combine the ABC algorithm and the CC strategy effectively, the treatment of infeasible individuals is redesigned: some of the infeasible individuals are evolved by the CC strategy, while the remaining infeasible ones and the feasible individuals are reproduced by the ABC algorithm. To verify the characteristics of ABCCC, the 10- and 30-dimensional CEC2017 benchmark functions are adopted to test its performance. The experiments show that, in most cases, ABCCC converges faster, achieves good convergence accuracy, and maintains population diversity well. The proposed ABCCC is highly competitive with DECC and PSOCC, and it is also compared with well-known constrained optimization algorithms, CALSHADE [8] and UDE [9].
The experimental results show that ABCCC outperforms these algorithms. Many variants of the ABC algorithm, such as CABC [10] and GABC [11], have been proposed in recent years, so the proposed ABCCC is also compared with CABCCC and GABCCC; it performs better than both.

This paper is organized as follows. After the Introduction, Section 2 elaborates constraint handling techniques in detail. Section 3 states the related work, and Section 4 describes the proposed algorithm. The experimental results and analysis are presented in Section 5. Finally, the conclusion is given in Section 6.

2. Constraint Handling Techniques

A great deal of work has already been undertaken on constraint handling techniques. They can be divided into four categories: (1) penalty functions; (2) special representations and operators; (3) separation of constraints and objectives; (4) hybrid methods. The substance of each category is briefly described below.

2.1. Penalty Functions

This method was originally proposed by Richard Courant in the 1940s and was later expanded by Alice and David [12]. It is one of the main approaches often used by researchers. The idea of penalty functions is to transform a constrained optimization problem into an unconstrained one by adding (or subtracting) a certain value to the objective function based on the amount of constraint violation present in a certain solution [13]. The general formulation of the penalty function is

φ(x) = f(x) + Σ_{i=1..q} r_i · G_i + Σ_{j=q+1..m} c_j · H_j

where G_i and H_j are functions of the inequality constraints g_i(x) and the equality constraints h_j(x), respectively (a common choice is G_i = max(0, g_i(x))^β and H_j = |h_j(x)|^γ), and r_i and c_j are positive constants normally called "penalty factors".

There are different types of penalty functions, such as (1) static penalty [14]; (2) dynamic penalty [15]; (3) death penalty [16]; and (4) adaptive penalty [17]. The main problem with penalty functions is choosing an "ideal" penalty factor. If the penalty is too high, the evolutionary algorithm will be pushed inside the feasible region too quickly. On the other hand, if the penalty is too low, much of the search time will be spent exploring the infeasible region, because the penalty will be negligible with respect to the objective function [18].
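To make the penalty idea concrete, the following sketch (Python; the penalty factors and exponents are illustrative choices, not values from the paper) applies a static penalty to a toy one-dimensional problem:

```python
def penalized(f, ineq, eq, x, r=1e3, c=1e3, beta=2, gamma=2):
    """Static penalty: phi(x) = f(x) + r * sum(max(0, g(x))^beta)
    + c * sum(|h(x)|^gamma).  r and c are the penalty factors whose
    tuning is the central difficulty discussed in the text."""
    penalty = r * sum(max(0.0, g(x)) ** beta for g in ineq)
    penalty += c * sum(abs(h(x)) ** gamma for h in eq)
    return f(x) + penalty

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized(f, [g], [], 0.5))   # 250.25: infeasible, heavily penalized
print(penalized(f, [g], [], 1.5))   # 2.25: feasible, no penalty added
```

With r = 1000, the infeasible point x = 0.5 receives a penalty of 1000 · 0.5² = 250, dominating its objective value 0.25; a smaller r would leave the infeasible region under-penalized, illustrating the trade-off above.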

In recent years, some modern constraint handling approaches that use penalty functions deserve special consideration, since they are highly competitive. Runarsson and Yao [19] proposed the stochastic ranking (SR) method to balance the objective function and the overall constraint violation stochastically. An interesting aspect of the approach is that it does not require the definition of a penalty factor. Instead, it requires a single user-defined parameter, Pf, which determines the balance between the objective function and the penalty function. The basic form of the method is presented in Table 1.

The main advantage of SR is its simplicity. However, it cannot guarantee any expected diversity measures, since its ranking is stochastic.

The ε-constraint handling method was proposed in [20], in which the relaxation of the constraints is controlled by the parameter ε. The ε level is updated every generation according to

ε(t) = ε(0) · (1 − t/Tc)^cp, 0 < t < Tc
ε(t) = 0, t ≥ Tc

where t is the generation counter and Tc is the control generation. The recommended parameter ranges are cp ∈ [2, 10] and Tc ∈ [0.1·Tmax, 0.8·Tmax] [20]. The ε-level comparisons are basically defined as a lexicographic order in which the constraint violation precedes the objective value, because the feasibility of a point is more important than the minimization of its objective value [21]. This method is able to maintain reasonable diversity; however, its extra parameters are a drawback.
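The ε-level schedule can be sketched as follows (Python; the default cp = 5 is an arbitrary value inside the recommended range):

```python
def eps_level(eps0, t, Tc, cp=5):
    """Eps-level schedule: eps(t) = eps0 * (1 - t/Tc)^cp while t < Tc,
    then 0, so the relaxation tightens until ordinary comparison remains."""
    if t >= Tc:
        return 0.0
    return eps0 * (1.0 - t / Tc) ** cp

print(eps_level(1.0, 0, 100))     # 1.0: full relaxation at the start
print(eps_level(1.0, 100, 100))   # 0.0: no relaxation after Tc
```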

2.2. Special Representations and Operators

Some researchers have developed special representation schemes to tackle a certain (particularly difficult) problem for which a generic representation scheme might not be appropriate.

A more intriguing idea is to transform the whole feasible region into a different shape that is easier to explore. The most important approaches designed along these lines are the "homomorphous maps" [22]. This approach performs a homomorphous mapping between an n-dimensional cube and the feasible search space (either convex or non-convex). The main idea is to transform the original problem into a different one that is easier for an evolutionary algorithm to optimize.

The homomorphous maps (HM) approach was the most competitive constraint handling technique for some time. However, its implementation is complex, and the reported experiments required a high number of fitness function evaluations [23].

2.3. Separation of Constraints and Objectives

Unlike penalty functions which combine the value of the objective function and the constraints of a problem to assign fitness, these approaches handle constraints and objectives separately.

Superiority of feasible points [24, 25]: the idea of this technique is to always assign a higher fitness to feasible solutions. Individuals are selected under the feasibility rules: a feasible solution is always considered better than an infeasible one; two feasible solutions are compared based on their objective function values only; and two infeasible solutions are compared based on their constraint violations. The main disadvantage of this approach is the loss of diversity among solutions.
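The feasibility rules can be expressed as a simple comparison function (a minimal sketch; the (objective, total violation) pair representation is an assumption of this example):

```python
def better(a, b):
    """Feasibility rules: a and b are (objective, total_violation) pairs;
    returns True if a is preferred over b (minimization assumed)."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:   # both feasible: compare objective values
        return fa < fb
    if va == 0 or vb == 0:    # exactly one feasible: the feasible one wins
        return va == 0
    return va < vb            # both infeasible: smaller total violation wins

print(better((5.0, 0.0), (1.0, 0.3)))   # True: feasible beats infeasible
print(better((2.0, 0.1), (1.0, 0.2)))   # True: smaller violation wins
```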

In [26], a multi-objective method with local and global search operators was proposed to solve constrained optimization problems. In this method, each constraint is treated as an objective to be optimized. One of the main drawbacks of this approach is the extra number of parameters and the time needed.

2.4. Hybrid Methods

This category covers methods in which an evolutionary algorithm is coupled with another technique.

Chen et al. [27] proposed a hybrid EA that integrates the penalty function method with the primal-dual method; the approach is based on sequential minimization of the Lagrangian.

Bernardino et al. [28] proposed a GA hybridized with an artificial immune system (AIS). The idea is to adopt some feasible solutions as antigens and evolve (in an inner GA) the antibodies (i.e., the infeasible solutions) so that they become "similar" (at a genotypic level) to the antigens.

3. Related Works

3.1. Classical ABC Algorithm

In a natural bee swarm, there are generally three kinds of honey bees that search for food: the employed bees, the onlookers, and the scouts (the onlookers and the scouts are also called unemployed bees). The employed bees search for food around the food sources in their memory and deliver the food information to the onlookers. The onlookers tend to select good food sources among those found by the employed bees and then search further around the selected sources. The scouts are transformed from a few employed bees that abandon their food sources to search for new ones. In short, the food search is performed collectively by the employed bees, the onlookers, and the scouts [29].

By simulating the foraging behaviors of the honey bee swarm, Karaboga invented the ABC algorithm for numerical function optimization [6]. The pseudo-code of the ABC algorithm is listed in Table 2, and detailed descriptions are given below.

In the initialization phase, the algorithm generates a group of food sources corresponding to solutions in the search space [30]. The food sources are produced randomly within the boundaries of the variables:

x_ij = lb_j + rand(0, 1) · (ub_j − lb_j)    (5)

where i = 1, …, SN and j = 1, …, D. SN is the number of food sources and equals half of the colony size. D is the dimension of the problem, representing the number of parameters to be optimized. lb_j and ub_j are the lower and upper bounds of the jth parameter. The fitness of the food sources is then evaluated. Additionally, the counters storing the number of trials of each bee are set to 0 in this phase.

In the employed bees' phase, each employed bee is sent to the food source in its memory and finds a neighboring food source. The neighboring food source is produced according to Equation (6):

v_ij = x_ij + φ_ij · (x_ij − x_kj)    (6)

where x_k is a randomly selected food source different from x_i, j is a randomly selected dimension, and φ_ij is a random number uniformly distributed in the range [−1, 1]. The new food source v_i is determined by changing one dimension of x_i. If the value produced by this operation exceeds the predetermined boundaries of that dimension, it is set to the boundary.

The new food source is then evaluated, and a greedy selection is applied between the original food source and the new one: the better one is kept in memory. The trials counter of this food source is reset to zero if the food source is improved; otherwise, its value is incremented by one.

In the onlooker bees' phase, the onlookers receive the information about the food sources shared by the employed bees. Each onlooker then chooses a food source to exploit with a probability related to the nectar amount (fitness value) of the food source; thus, more than one onlooker bee may choose the same food source if its fitness is high. The probability is calculated according to Equation (7):

p_i = fit_i / Σ_{n=1..SN} fit_n    (7)

where fit_i is the fitness value of the ith food source.

After food sources have been chosen, each onlooker bee finds a new food source in its neighborhood following Equation (6), just like the employed bee does. A greedy selection is applied on the new and original food sources, too.

In the scout bees' phase, if a food source has not been improved for a predetermined number of cycles, given by a control parameter called "limit", the food source is abandoned and its bee becomes a scout. A new food source is produced randomly in the search space using Equation (5), as in the initialization phase.

The employed, onlooker, and scout bees' phases are repeated until the termination condition is met. The best food source, which represents the best solution, is then output.
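The four phases described above can be sketched as follows (Python; parameter values such as sn, limit, and max_cycles are illustrative defaults, and the sketch handles unconstrained minimization only):

```python
import random

def abc_minimize(f, lb, ub, sn=20, limit=30, max_cycles=200, seed=1):
    """Sketch of the classical ABC algorithm for unconstrained minimization.
    sn food sources; each cycle runs the employed, onlooker and scout phases."""
    rng = random.Random(seed)
    d = len(lb)

    def new_source():
        # Eq. (5): random point within the variable bounds.
        return [lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]

    foods = [new_source() for _ in range(sn)]
    costs = [f(x) for x in foods]
    trials = [0] * sn

    def try_improve(i):
        # Eq. (6): perturb one randomly chosen dimension towards/away
        # from a randomly chosen partner food source k != i.
        k = rng.choice([s for s in range(sn) if s != i])
        j = rng.randrange(d)
        v = foods[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lb[j]), ub[j])        # clip to the bounds
        fv = f(v)
        if fv < costs[i]:                          # greedy selection
            foods[i], costs[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(sn):                        # employed bees' phase
            try_improve(i)
        fit = [1.0 / (1.0 + c) for c in costs]     # nectar amounts (costs >= 0 here)
        for _ in range(sn):                        # onlooker bees' phase, Eq. (7)
            try_improve(rng.choices(range(sn), weights=fit)[0])
        for i in range(sn):                        # scout bees' phase
            if trials[i] > limit:
                foods[i] = new_source()
                costs[i] = f(foods[i])
                trials[i] = 0

    best = min(range(sn), key=costs.__getitem__)
    return foods[best], costs[best]

x, fx = abc_minimize(lambda x: sum(v * v for v in x), [-5.0, -5.0], [5.0, 5.0])
print(fx)   # a small value near the sphere minimum at the origin
```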

3.2. Constraint Consensus Strategy

The CC strategy uses a variety of projection algorithms to solve feasibility problems with nonlinear and nonconvex constraints. The key idea is to help a currently infeasible solution move quickly into, or close to, the feasible region by constructing a consensus among the currently violated constraints [7, 31, 32].

Before elaborating the CC strategy, it is necessary to introduce some concepts. The feasibility vector for an individual constraint is the vector extending from an infeasible point to its orthogonal projection on the constraint [33]. The measure of how close an infeasible point is to feasibility is the minimum Euclidean distance between the point and the feasible region, referred to here as the feasibility distance [33].

The first step is to find the feasibility vector for each constraint that is violated at the current point x. The feasibility vector is calculated according to the following formula:

fv_c = s_c · (v_c / ‖∇c(x)‖²) · ∇c(x)

where ∇c(x) is the gradient of the constraint at x, ‖∇c(x)‖ is its length, v_c = |c(x) − b| is the constraint violation (zero for satisfied constraints), b is the right-hand side of the constraint, and s_c is +1 if c(x) must increase to satisfy the constraint and −1 if it must decrease.

The consensus vector t is constructed by component-wise averaging of the feasibility vectors of the violated constraints. Let n_j represent the number of violated constraints that have variable x_j as a component, fv_ij the component for variable x_j in the feasibility vector of the ith constraint, and s_j the sum of the fv_ij for variable x_j over the feasibility vectors of all violated constraints. The component of the consensus vector for variable x_j is then given by t_j = s_j / n_j. If this vector is too short, the iterations are halted with an unsuccessful outcome [7].

The constraint consensus algorithm is summarized in Table 3. The algorithm halts when every constraint has a constraint violation of zero or a feasibility distance less than the feasibility tolerance, i.e., when the number of violated constraints NINF is zero.
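A minimal sketch of one constraint consensus iteration follows, using forward-difference gradients (the representation of constraints as (function, right-hand side, kind) triples is an assumption of this example):

```python
def consensus_step(x, constraints, h=1e-6):
    """One iteration of the basic (averaged) constraint consensus method.
    constraints: list of (c, b, kind) triples meaning c(x) <= b for kind 'le'
    or c(x) == b for kind 'eq'.  Returns (moved point, NINF), where NINF is
    the number of constraints violated at the input point."""
    n = len(x)

    def grad(c):
        # Forward-difference estimate of the constraint gradient at x.
        c0 = c(x)
        g = []
        for j in range(n):
            xp = list(x)
            xp[j] += h
            g.append((c(xp) - c0) / h)
        return g

    sums = [0.0] * n      # s_j: summed feasibility-vector components
    counts = [0] * n      # n_j: violated constraints involving x_j
    ninf = 0
    for c, b, kind in constraints:
        v = c(x) - b      # signed violation
        if not (v > 0 if kind == 'le' else v != 0):
            continue      # constraint satisfied: contributes nothing
        ninf += 1
        g = grad(c)
        glen2 = sum(gj * gj for gj in g)
        if glen2 == 0:
            continue
        # Feasibility vector: from x to its projection on the constraint
        # boundary (exact for linear constraints).
        fv = [-(v / glen2) * gj for gj in g]
        for j in range(n):
            if fv[j] != 0.0:
                sums[j] += fv[j]
                counts[j] += 1
    t = [sums[j] / counts[j] if counts[j] else 0.0 for j in range(n)]
    return [xj + tj for xj, tj in zip(x, t)], ninf

# One step from the infeasible point (0, 0) for the constraint x0 + x1 >= 1,
# written as 1 - x0 - x1 <= 0: the move lands on the constraint boundary.
y, ninf = consensus_step([0.0, 0.0], [(lambda z: 1.0 - z[0] - z[1], 0.0, 'le')])
print(y, ninf)
```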

4. Constraint Consensus Update-Based ABC Algorithm

In this section, the overall flow chart of the proposed algorithm is introduced and discussed first.

4.1. Overall Flow Chart

The overall process of the proposed algorithm is illustrated in Figure 1. A population of size NP is initialized stochastically. Suppose there are IF infeasible individuals in the initialized population. New offspring of P of these infeasible individuals, not all IF of them, are generated based on the CC strategy introduced in Section 3.2. The classical generation strategy of the ABC algorithm is applied to the remaining infeasible individuals and all feasible individuals, NP − P in total. The aim is to save computing time as well as to maintain the diversity of the population. Each newly generated individual survives into the next generation if it is better than its corresponding parent.

The pseudo-code of the algorithm is shown in Table 4 and is explained below.

4.2. Detailed Steps of the ABCCC

In this algorithm, a new update operator that combines the CC method with the traditional mutation operator is proposed. It drives infeasible points towards a better search region rather than moving them randomly, thereby quickly reducing the total constraint violation. Note that the proposed update operator is applied only to some of the infeasible individuals; the update operator based on the ABC method is used for the remaining individuals (both feasible and infeasible).

First, the population is initialized by the following formula:

x_ij = lb_j + rand(0, 1) · (ub_j − lb_j), j = 1, …, D

where ub_j and lb_j are the upper and lower bounds of decision variable x_j, and D is the dimension of the variables.

Then, for some of the infeasible solutions, P in total, new offspring are generated as follows:

x_i^new = x_i + r · cv_i

where cv_i is the consensus vector obtained by the constraint consensus strategy for individual x_i, and r is a random factor in the range [0.4, 0.9].

For the remaining NP − P individuals (the rest of the infeasible ones and all feasible ones), the update method based on the ABC algorithm is adopted.

Each newly generated individual survives into the next generation if it is better than its corresponding parent, and the whole population is sorted based on fitness values and constraint violations. In this paper, individuals are selected according to the following criteria: (1) between two feasible solutions, the one with the smaller fitness value is fitter (for minimization problems); (2) a feasible solution is always better than an infeasible one; and (3) between two infeasible solutions, the one with the smaller sum of constraint violations is selected.

Equality constraints are converted into inequalities of the following form, where ε is a small tolerance value, e.g., 10⁻⁴:

|h_j(x)| − ε ≤ 0
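The violation measure and the split of each generation between the CC update and the ABC update (Figure 1) can be sketched as follows (Python; the helper names and the list-based population representation are illustrative):

```python
import random

def total_violation(x, ineq, eq, eps=1e-4):
    """Sum of constraint violations.  Equality constraints h(x) = 0 are
    converted to |h(x)| - eps <= 0 with a small tolerance eps, as in the text."""
    v = sum(max(0.0, g(x)) for g in ineq)
    v += sum(max(0.0, abs(h(x)) - eps) for h in eq)
    return v

def split_for_cc(population, violation, p_ratio=0.5, rng=random):
    """Partition one generation as in Figure 1: a p_ratio share of the
    infeasible individuals goes to the CC update; everyone else (the
    remaining infeasible plus all feasible individuals) goes to the ABC update."""
    infeasible = [x for x in population if violation(x) > 0]
    feasible = [x for x in population if violation(x) == 0]
    rng.shuffle(infeasible)
    p = int(p_ratio * len(infeasible))
    return infeasible[:p], infeasible[p:] + feasible

# Toy constraints: x0 <= 1 (inequality) and x1 == 0 (equality).
ineq = [lambda x: x[0] - 1.0]
eq = [lambda x: x[1]]
viol = lambda x: total_violation(x, ineq, eq)
pop = [[2.0, 0.0], [3.0, 0.0], [0.5, 0.0]]
cc_group, abc_group = split_for_cc(pop, viol, 0.5, random.Random(0))
print(len(cc_group), len(abc_group))   # 1 individual for CC, 2 for ABC
```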

5. Results and Analysis

In this section, we present and analyze the performance of the proposed ABCCC algorithm.

5.1. Test Functions

The algorithm was tested by solving the set of benchmark problems that were introduced in the CEC 2017 competition on Constrained Optimization [34].

These problems have different mathematical characteristics: the objective functions and constraints may be linear or nonlinear, the constraints may be of equality or inequality type, the objective function may be unimodal or multimodal, and the feasible space may be very tiny compared to the search space.

The 10 test functions, C01 to C10, are adopted; their full definitions, including the number of decision variables D and the associated constants, are given in [34].

5.2. Experimental Settings

The parameter settings of the proposed ABCCC algorithm were as follows: the population size NP was 100, the percentage P of selected infeasible individuals was 50%, and the random factor was 0.5. The total number of iterations was set to 100, and the number of independent runs for each problem was set to 10. The parameter settings of the DECC and PSOCC algorithms were similar to those of ABCCC; specifically, the scaling factor F of DE was set to 0.5 and the crossover rate CR to 0.4. For PSO, the two learning factors were equal, with value 1.4, and the inertia weight was set to 0.8. The parameters of the CC strategy were set as follows: the feasibility distance tolerance was 10⁻⁶, the movement tolerance was 0.0001, and the preset number of iterations was 10. For equality constraints, the tolerance value ε was set to 10⁻⁴.

According to [33], many variations of the original CC method have been proposed, such as (1) the feasibility-distance-based (FDnear/FDfar) algorithms, (2) the average direction-based (DBavg) algorithm, (3) the maximum direction-based (DBmax) algorithm, and (4) the direction-based and bound-based (DBbnd) algorithm. They differ in the way they construct the consensus vector. In the basic CC method, all feasibility vectors are treated equally, and the movement is created by averaging the nonzero components of the feasibility vectors. The feasibility-distance-based algorithms use the length of the feasibility vector associated with each violated constraint to set the consensus vector: in the "near" mode, the consensus vector is set equal to the shortest feasibility vector; in the "far" mode, to the longest. The DBavg method decides the direction of movement in each dimension by a simple count of the votes for positive or negative movement, and the magnitude of the movement is decided by averaging the projections in the winning direction. DBmax decides the direction of movement based on the most common sign among the components of the feasibility vectors, and then the largest proposed movement in the winning direction is set as the consensus vector. In the DBbnd method, the size of the movement in each component depends on the types of constraints that involve that variable: movements in the selected direction suggested by equality constraints are totaled, while for inequalities only the largest movement in the selected direction is added [33].
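The differences between the basic averaging scheme, the average direction-based scheme, and the maximum direction-based scheme can be sketched from a set of feasibility vectors (a simplified illustration; tie-breaking towards the positive direction is an arbitrary choice of this sketch):

```python
def consensus_variants(fvs):
    """Build the consensus vector from the feasibility vectors fvs (one per
    violated constraint) under three schemes described in [33]: basic
    averaging, DBavg (vote on sign, average the winners), and DBmax (vote
    on sign, take the largest winner).  Ties go to the positive side."""
    n = len(fvs[0])
    basic, dbavg, dbmax = [], [], []
    for j in range(n):
        comps = [fv[j] for fv in fvs if fv[j] != 0.0]
        if not comps:
            basic.append(0.0); dbavg.append(0.0); dbmax.append(0.0)
            continue
        basic.append(sum(comps) / len(comps))           # plain average
        pos = [c for c in comps if c > 0]
        neg = [c for c in comps if c < 0]
        win = pos if len(pos) >= len(neg) else neg      # majority vote on sign
        dbavg.append(sum(win) / len(win))               # average of winners
        dbmax.append(max(win, key=abs))                 # largest winner
    return basic, dbavg, dbmax

# Two feasibility vectors: both vote +x0; only one proposes movement in x1.
print(consensus_variants([[1.0, -2.0], [3.0, 0.0]]))
```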

Therefore, for some of the infeasible individuals, P in total, the above methods are applied respectively. In addition, the newly proposed algorithm (ABCCC) is compared with the original algorithm (DECC). Moreover, the DE optimizer of the original algorithm is also replaced with PSO; that is, all the remaining individuals (both feasible and infeasible) are evolved with PSO. The comparison results are shown in the next section.

5.3. Results

In this section, ABCCC, PSOCC, and DECC are tested first. Then ABCCC is compared with different CC variants. Next, the effect of the parameter P in ABCCC is analyzed, and ABCCC is compared with other advanced algorithms on CEC2017. Finally, the comparison results of ABCCC and the ABC variants are displayed.

5.3.1. ABCCC and Other CC-Based EAs against CEC2017

The numerical comparison results for both the 10-D and 30-D test problems are presented in Table 5. To observe the performance of the algorithms more intuitively, the convergence curves for each test function are shown in Figure 2, and box plot results are shown in Figure 3.

In the 10-D case, Table 5 clearly shows that the proposed ABCCC algorithm obtained better solutions, with smaller fitness values, than PSOCC and DECC, especially for the best and average results in most cases (C01, C02, C03, C04, C05, C06, C07, and C09). The curves in Figure 2 show that ABCCC had a faster convergence rate than the others and obtained better global solutions. Nevertheless, there were exceptions on a few test functions (C08 and C10), for which the best solutions were obtained by DECC.

As the dimension increases, it becomes more difficult and time-consuming for evolutionary algorithms to find the minimum, so it was foreseeable that the 30-D results would be worse than the 10-D ones. Table 5 shows that, on the 30-D problems, the proposed ABCCC algorithm displayed some instability. For C01, C02, and C03, the best solutions obtained by PSOCC were smaller than those of ABCCC and DECC, while ABCCC still outperformed DECC. For the remaining test functions (C04, C05, C06, C08, C09, and C10), ABCCC obtained the best solutions, but they were distinctly larger than those in the 10-dimensional case. C07 was a special case: both its best value and its average value in 30-D were smaller than those in 10-D.

Figure 3 shows the box plots of the algorithms. For each box, the central mark indicates the median, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, and the outliers are plotted individually using the "+" symbol. From Figure 3, ABCCC succeeded on most test functions (C01, C02, C03, C04, C05, C06, C07, and C09).

5.3.2. ABCCC with Different CC Variants Against CEC2017

The comparison results for ABCCC with different CC variants on both the 10-D and 30-D test problems are shown in Table 6. Figure 4 shows the convergence curves for each test function, and Figure 5 shows the box plot results.

From Table 6, there is no clear pattern in where the optimal solutions occur; the different algorithms obtain similar results. Moreover, for ABCCC and its variants, the optimum value in 30-D was larger than in 10-D except for C07. There was also little difference between CC and its variants in the 30-D results. From Figure 5, there is likewise no regular pattern for judging which of ABCCC and its variants performed better.

From Figure 4, intuitively, there was no obvious difference among the experimental results of ABCCC and its variants. In particular, for the test functions C04 and C09, the outcomes obtained by the different variants were nearly the same because the global optimal solutions had been found successfully. From Table 6, for C06, although ABCCC and all its variants could find the global optimal solutions (the same best values), the mean values and standard deviations were entirely different, which means their robustness varied. C07 differed slightly from C06: ABCCC and its variants obtained the same best and mean values, but the standard deviations were distinct. In summary, the results indicate that, in most cases, ABCCC and its variants had similar effects.

5.3.3. Performance of ABCCC with Different Values of P against CEC2017

In this section, ABCCC was run with different percentages P of selected infeasible individuals, where P was set to 10%, 30%, 50%, 70%, and 90%, to evaluate their influence on the results. The detailed results are shown in Table 7. The search cycle is expected to lengthen with higher values of P.

When comparing search cycles, the ABCCC algorithm with the smallest P costs the least time to obtain results, since fewer individuals undergo the CC iterations. Considering the quality of solutions, the performance of ABCCC with P = 50% is slightly better than most other settings based on the average fitness values in Table 7, and according to the best fitness values it is also superior to the other values of P.

Figure 6 shows the convergence curves for the C02 function; ABCCC with P = 50% clearly converges faster and obtains a better optimal solution.

5.3.4. ABCCC Compared with CALSHADE and UDE against CEC2017

From the above analysis, ABCCC with P = 50% is regarded as the best configuration, so we compare it with well-known constrained optimization algorithms, CALSHADE [8] and UDE [9]. The detailed results are shown in Tables 8 and 9. Based on the fitness values for the 10-D and 30-D test problems, ABCCC obtains better optimal solutions for most test functions. However, for C01 and C02 in both 10-D and 30-D, CALSHADE obtains the optimal solutions. For C08 and C10, UDE performs best in the 10-D case, while CALSHADE performs better on C10 in the 30-D case.

Comparing the results of our proposed algorithm with those of CALSHADE and UDE shows that the ABCCC algorithm outperforms the other algorithms overall.

5.3.5. ABCCC Compared with ABC’s Variants with CC Strategy against CEC2017

Many other variants of the ABC algorithm have been proposed, such as CABC [10] and GABC [11]. The proposed ABCCC algorithm is therefore also compared with ABC variants combined with the CC strategy on CEC2017. The results are shown in Tables 10 and 11. Clearly, ABCCC performs best on most test functions, except for the 10-D C01, C08, and C10 cases: GABCCC obtains the optimum for C01 and C08, and CABCCC finds the best solution for C10. When the dimension is 30, ABCCC performs better than the others based on the average fitness values.

GABCCC utilizes the global optimal value to mutate individuals in the search cycle, but this reduces diversity. CABCCC randomly selects dimensions and carries out mutation operations on individuals in each selected dimension, which lowers its convergence rate; during the tests, its search cycle became longer and the results were not accurate enough.

6. Conclusion

During the last few decades, many evolutionary algorithms have been introduced to solve constrained optimization problems; however, effective constraint handling techniques are still lacking in evolutionary optimization. In this research, a new updating strategy for the ABC algorithm has been introduced, inspired by the usefulness of the constraint consensus (CC) approach. Utilizing CC within evolutionary algorithms brings considerable improvements. To minimize the computational time and maintain good diversity within the population, the new CC-based updating strategy was applied to some of the infeasible individuals in each generation, while the remaining individuals (both feasible and infeasible) were evolved by the ABC method.

The new method was tested on a set of constrained problems. Experiments compared different variants of the approach, as well as the performance of ABCCC against DECC and PSOCC. In most cases, ABCCC obtained better results and converged faster than the other two methods, while the CC method and its variants performed similarly on the test functions.

The proposed ABCCC algorithm was also compared with well-known constrained optimization algorithms, CALSHADE [8] and UDE [9]. The experimental results showed that the ABCCC algorithm performed well and could obtain the optimal solutions. Furthermore, when the ABCCC algorithm was compared with ABC variants using the CC strategy, its results were superior to those of the other algorithms.

For future work, we will analyze the effect of different parameter values on the performance of the algorithm and try to improve the proposed algorithm further.

Data Availability

The MATLAB data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the manuscript.

Acknowledgments

This work is supported by the National Key Research and Development Program of China under Grant no. 2017YFB1103604; the National Natural Science Foundation of China under Grants nos. 61602343, 61806143, and 41772123; Tianjin Province Science and Technology Projects under Grants nos. 17JCQNJC04500 and 17JCYBJC15100; and the Basic Scientific Research Business Funded Projects of Tianjin (2017KJ093 and 2017KJ094).