Abstract

The constrained optimization problem (COP) is first converted into a biobjective optimization problem, and then a new memetic differential evolution algorithm with dynamic preference is proposed to solve the converted problem. In the memetic algorithm, the global search, which uses differential evolution (DE) as the search scheme, is guided by a novel fitness function based on the achievement scalarizing function (ASF). The novel fitness function, constructed from a reference point and a weighting vector, dynamically adjusts the preference towards the two objectives during evolution; the reference point and the weighting vector are determined adaptively from the current population. In the local search procedure, simplex crossover (SPX) is used as the search engine. It concentrates on the neighborhood spanned by the best feasible and infeasible individuals and guides the search towards the optimal solution from both sides of the boundary of the feasible region. As a result, the search can efficiently explore and exploit the search space. Numerical experiments on 22 well-known benchmark functions are executed, and comparisons with five state-of-the-art algorithms are made. The results illustrate that the proposed algorithm is competitive with, and in some cases superior to, the compared algorithms in terms of the quality, efficiency, and robustness of the obtained results.

1. Introduction

In many science and engineering fields, there often arise optimization problems that are subject to different types of constraints; they are called constrained optimization problems (COPs). Due to the presence of constraints, COPs are well known as a challenging task [1] in the optimization field. Evolutionary algorithms (EAs), inspired by nature, are a class of stochastic, population-based optimization techniques that have been widely applied to global optimization problems. In the last decades, researchers have applied EAs to COPs and proposed a large number of constrained optimization EAs (COEAs) [2–4].

As is well known, the goal of COEAs is to find the optimal solution which satisfies all constraints. Constraint satisfaction is therefore the primary requirement, and the constraint-handling technique greatly affects the performance of COEAs. The most common methods to handle constraints are penalty function based methods [4, 5], which transform a COP into an unconstrained one by adding a penalty term to the original objective function in order to penalize infeasible solutions. The main drawback of penalty-function-based methods is the need to preset an appropriate penalty factor, which greatly influences their performance. However, as indicated in [2], deciding an optimal penalty factor is a very difficult problem. To avoid the use of a penalty factor, Powell and Skolnick [6] suggested a fitness function which maps feasible solutions into the interval (−∞, 1) and infeasible solutions into the interval (1, +∞). The method considers that feasible solutions are always superior to infeasible ones. Inspired by Powell and Skolnick [6], Deb [3] proposed three feasibility rules for binary tournament selection without introducing new parameters. The binary tournament selection based on the three feasibility rules is as follows: (1) any feasible solution is preferred to any infeasible one; (2) between two feasible solutions, the one with the better objective value is preferred; (3) between two infeasible solutions, the one with the smaller constraint violation is preferred. However, the feasibility rules always prefer feasible solutions to infeasible ones, which gives promising infeasible solutions, which may carry more important information than their feasible counterparts, little opportunity to enter the next population. Therefore, the diversity of the population degrades, which makes the algorithm prone to becoming trapped in local optima.
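For concreteness, the following is a minimal sketch of binary tournament selection under Deb's three feasibility rules [3]; representing a solution as the pair (objective value f, total constraint violation G) is an illustrative choice of this sketch, not part of the original method.

```python
# Binary tournament under Deb's three feasibility rules.
# A solution is a tuple (f, G); G == 0 means feasible.

def deb_tournament(a, b):
    f_a, g_a = a
    f_b, g_b = b
    if g_a == 0 and g_b == 0:        # rule 2: both feasible -> better objective wins
        return a if f_a <= f_b else b
    if g_a == 0:                     # rule 1: feasible beats infeasible
        return a
    if g_b == 0:
        return b
    return a if g_a <= g_b else b    # rule 3: both infeasible -> smaller violation wins
```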

In recent years, some researchers suggested transforming a COP into a biobjective optimization problem based on the penalty function [7–9]. For the converted biobjective optimization problem, if Pareto dominance is used as the sole criterion for comparing individuals, then the objective function and the constraint violation function are treated as equally significant. This results in some solutions that are far away from the feasible region but have small objective function values being retained during evolution. However, such solutions are of little help in searching for the optimal solution of the original problem, especially for problems whose optimal solutions lie inside the feasible region.

From the above analysis, it can be concluded that the methods which consistently prefer feasible solutions to infeasible ones are arguable, and the methods which treat the objective function as equally important as constraint satisfaction are ineffective. Instead, methods that can balance the objective function and constraint satisfaction are expected to be effective. Nevertheless, how to balance both objectives during evolution needs to be considered carefully.

Runarsson and Yao [2] proposed the stochastic ranking (SR) method, which uses a comparison probability $P_f$ to balance the objective function and the constraint violation. The SR based comparison prefers some good infeasible solutions with the probability $P_f$, which not only gives promising infeasible solutions the opportunity to survive into the next population, but also enhances the diversity of the population. Wang et al. [9] proposed an adaptive tradeoff model (ATM) with evolution strategy to solve COPs, which can adaptively adjust the proportion of infeasible solutions surviving into the next generation. Takahama and Sakai [10] transformed a COP into an unconstrained numerical optimization problem with a novel constraint-handling technique, the so-called ε-constrained method. The method relaxes the limit for considering a solution as feasible, based on its sum of constraint violations, so that the objective function value can be used as the comparison criterion, and the extent of the relaxation decreases dynamically during evolution.

As is indicated in [11], besides the constraint-handling technique, the performance of COEAs depends on another crucial factor: the search mechanism of the EA. The currently popular search algorithms involve evolution strategy (ES) [2, 9, 12], particle swarm optimization (PSO) [13], differential evolution (DE) [14–17], the electromagnetism-like mechanism algorithm [18], and so forth. Among these search algorithms, DE is a more recent and simple search engine. In the competition on constrained real-parameter optimization at the IEEE CEC 2006 special session, three of the top five algorithms used DE as the search engine, which shows the efficiency of DE; DE has since attracted much attention in various practical cases. Wang and Cai [19] combined multiobjective optimization with DE to solve COPs, in which DE was used as the search mechanism and Pareto dominance was used to update the evolving population. Zhang et al. [20] proposed a dynamic stochastic selection scheme based on SR [2] and combined it with the multimember DE [21]. Jia et al. [22] presented an improved DE for COPs, in which a multimember DE [21] is also used as the search engine and an improved ATM [9] version is applied as the comparison criterion between individuals. In recent years, the combination of different DE variants with adaptive parameter settings has become popular in algorithm design [1, 23, 24]. In [1], a modified basic mutation strategy is combined with a new directed mutation scheme, which is based on the weighted difference vector between the best and the worst individuals at a particular generation. Moreover, the scaling factor and crossover rate are also set dynamically to balance the global search and local search. Huang et al. [23] also proposed a self-adaptive DE for COPs, in which the mutation strategies and the parameter settings are self-adapted during evolution. Elsayed et al. [24] proposed an evolutionary framework that utilizes existing knowledge to make logical changes for better performance. In the proposed algorithm, 8 mutation schemes are tested on a suite of benchmark instances, and the performance of each scheme is analyzed. Then a self-adaptive multistrategy differential evolution algorithm, using the best several variants, is exhibited. The proposed algorithm divides the population into a number of subpopulations, where each subpopulation evolves with its own mutation and crossover, and the population size, scaling factor, and crossover rate are adapted as the search progresses. Furthermore, an SQP based local search is used to speed up the convergence of the algorithm. Moreover, DE or its mutation operator has also been combined with other algorithms to improve performance [11, 25]. In [11], Runarsson and Yao improved SR [2] by combining ES with the mutation operator adopted in DE, which greatly improves the quality of the solutions. In [25], the authors proposed biogeography-based optimization (BBO) approaches for COPs, in which one version uses DE mutation operators instead of the conventional BBO-based operators. The experimental results show that the version combined with DE mutation operators outperforms the other BBO variants, which confirms the efficiency of DE operators.

In this paper, we propose a memetic differential evolution algorithm with dynamic preference (MDEDP) to solve COPs. In the proposed method, DE is used as the global search engine, and the simplex crossover (SPX) is incorporated to encourage local search in the promising region of the search space. In the constraint-handling technique, the COP is first transformed into a biobjective optimization problem with the original objective function and a constraint violation function, constructed from the constraints, as the two objectives. Then, for the converted biobjective problem, an achievement scalarizing function (ASF) based fitness function is presented to balance the objective function and the constraint violation during evolution. The reference point and the weighting vector used in constituting the ASF are dynamically adapted to achieve a good balance between the objective and constraint violation functions.

The rest of this paper is organized as follows. COP transformation and some related concepts are briefly introduced in Section 2. In Section 3, DE algorithm and simplex crossover operator are briefly presented. The novel memetic DE with dynamic preference for COP is proposed in Section 4. Section 5 gives experimental results and comparisons with other state-of-the-art algorithms for 22 standard test problems. Finally, we summarize the paper in Section 6.

2.1. Constrained Optimization Problem

In general, a constrained optimization problem can be stated as follows:

$$\begin{aligned} \min\quad & f(x) \\ \text{s.t.}\quad & g_j(x) \le 0, \quad j = 1, \dots, q, \\ & h_j(x) = 0, \quad j = q+1, \dots, m, \end{aligned} \tag{1}$$

where $x = (x_1, \dots, x_n)$ is the decision variable and $S$ is an $n$-dimensional rectangular search space in $\mathbb{R}^n$ defined by $l_i \le x_i \le u_i$, $i = 1, \dots, n$. $f(x)$ is the objective function, $g_j(x) \le 0$, $j = 1, \dots, q$, are inequality constraints, and $h_j(x) = 0$, $j = q+1, \dots, m$, are equality constraints. The feasible region is defined as $\Omega = \{x \in S \mid g_j(x) \le 0,\ j = 1, \dots, q;\ h_j(x) = 0,\ j = q+1, \dots, m\}$. Solutions in $\Omega$ are feasible solutions.

2.2. Problem Transformation and Related Concepts

Penalty function methods are the most common constraint-handling technique for solving COPs; however, they are sensitive to the penalty factor, and the tuning of penalty parameters is very difficult [2]. Reformulating a constrained optimization problem as a biobjective problem is a more recent, effective constraint-handling method [7–9]. In general, the degree of constraint violation of $x$ on the $j$th constraint is defined as

$$G_j(x) = \begin{cases} \max\{0,\ g_j(x)\}, & 1 \le j \le q, \\ \max\{0,\ |h_j(x)| - \delta\}, & q+1 \le j \le m, \end{cases} \tag{2}$$

where $\delta$ is a small positive tolerance value for the equality constraints. Then $G(x) = \sum_{j=1}^{m} G_j(x)$ reflects the degree of violation of all constraints at $x$ and also measures the distance of $x$ to the feasible region. Obviously, $G(x) \ge 0$, and $x$ is feasible if and only if $G(x) = 0$.
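As a concrete illustration, the violation function $G(x)$ in (2) might be computed as in the following sketch; the argument names (the constraint callables `gs`, `hs` and the tolerance `delta`) are assumptions of this sketch.

```python
# Total constraint violation G(x) as defined in (2).
# gs: callables for inequality constraints g_j(x) <= 0
# hs: callables for equality constraints h_j(x) = 0, relaxed by delta

def violation(x, gs, hs, delta=1e-4):
    total = sum(max(0.0, g(x)) for g in gs)                 # inequality part
    total += sum(max(0.0, abs(h(x)) - delta) for h in hs)   # relaxed equality part
    return total                                            # G(x) >= 0; 0 iff feasible
```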

Based on the constraint violation function $G(x)$, the COP is converted into a biobjective optimization problem which minimizes the original objective function and the constraint violation function simultaneously. The converted biobjective problem is as follows:

$$\min_{x \in S} F(x) = \big( f(x),\ G(x) \big). \tag{3}$$

For simplicity, the two objectives of (3) are denoted by $f_1(x) = f(x)$ and $f_2(x) = G(x)$.

Though the COP is converted into the biobjective optimization problem (3), the two are essentially different (see Figure 1). The biobjective problem (3) intends to find a set of representative solutions uniformly distributed on the Pareto front (PF), while COP (1) is to find a single solution which satisfies all constraints ($G(x) = 0$) and also minimizes $f(x)$, that is, the point in the intersection of the PF and the feasible region.

3. Differential Evolution and Simplex Crossover Operator

3.1. Differential Evolution [26]

Differential evolution, proposed by Storn and Price [26], is a parallel direct search method and follows the general procedure of an EA. For its simplicity and efficiency, DE has been widely employed to solve constrained optimization problems [1, 14–16, 19–21]. In the DE process, an initial population with NP individuals is produced randomly in the decision space. For each target vector $x_i$ at each generation, DE then employs the three operations below in turn. (There are more than 10 variants of DE schemes in the related references; in this paper, the most often used scheme "DE/rand/1/bin" is employed in the search process, where "rand" denotes that the vector to be mutated is randomly selected from the current population, "1" specifies the number of difference vectors, and "bin" denotes the binomial crossover scheme.)

Mutation. A trial vector $v_i$ for each target vector $x_i$ is generated from the current parent population via the "rand/1" mutation strategy:

$$v_i = x_{r_1} + F \cdot \left( x_{r_2} - x_{r_3} \right),$$

where $r_1$, $r_2$, and $r_3$ are mutually different integers randomly selected from $\{1, 2, \dots, NP\} \setminus \{i\}$ and $F$ is a scaling factor which controls the amplification of the differential variation $x_{r_2} - x_{r_3}$.

Crossover. To increase the diversity, the offspring vector $u_i$ is then generated by binomial crossover between the target vector and the trial vector, in which, for $j = 1, \dots, n$,

$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } \operatorname{rand}_j \le CR \text{ or } j = j_{\operatorname{rand}}, \\ x_{i,j}, & \text{otherwise}, \end{cases}$$

where $CR$ is the crossover probability and $j_{\operatorname{rand}}$ is a randomly chosen index which ensures that the offspring differs from its parent in at least one component.

Selection. The offspring is compared with its parent using the greedy criterion, and the one with the smaller fitness is selected into the next population.
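Putting the three operations together, one generation of DE/rand/1/bin can be sketched as follows; bound handling is omitted, the F and CR defaults are illustrative, and the scalar `fitness` callable stands in for whichever comparison criterion the surrounding algorithm uses.

```python
import numpy as np

# One DE/rand/1/bin generation over an (NP, n) population array.

def de_rand_1_bin(pop, fitness, F=0.5, CR=0.9, rng=np.random.default_rng()):
    NP, n = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3, i all distinct
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover with one guaranteed component from v
        j_rand = rng.integers(n)
        mask = (rng.random(n) <= CR) | (np.arange(n) == j_rand)
        u = np.where(mask, v, pop[i])
        # greedy one-to-one selection: smaller fitness survives
        if fitness(u) <= fitness(pop[i]):
            new_pop[i] = u
    return new_pop
```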

3.2. Simplex Crossover Operator (SPX) [27]

Simplex crossover (SPX) is one of the most commonly used recombination operators in EAs; it generates offspring based on a uniform distribution. The search region of SPX is adaptively adjusted during evolution, which gives SPX good exploration ability in the earlier stages and good exploitation ability in the later stages of evolution. Moreover, SPX is very simple and easy to implement.

In $\mathbb{R}^n$, $\mu$ mutually independent parent vectors $x_1, \dots, x_\mu$ form a simplex. An offspring is generated through the following two steps: (i) expand the original simplex along each direction by $\varepsilon$ times to form a new simplex with vertexes $y_i = O + \varepsilon (x_i - O)$, $i = 1, \dots, \mu$, where $O = \frac{1}{\mu} \sum_{i=1}^{\mu} x_i$ is the center of the original simplex; (ii) choose a point uniformly from the new simplex as the offspring, that is, offspring $c = \sum_{i=1}^{\mu} r_i y_i$, where each $r_i$ is a random number in $[0, 1]$ and the $r_i$ satisfy the condition $\sum_{i=1}^{\mu} r_i = 1$.

In general, the SPX is specified as SPX-$n$-$\mu$-$\varepsilon$, where $n$ is the dimension of the search space, $\mu$ is the number of parents constituting the simplex, selected in $\{2, \dots, n+1\}$, and $\varepsilon$ is a control parameter that defines the expanding rate. In the above procedure of producing offspring, the number of parents is set to $\mu = 3$, and the crossover scheme is denoted by SPX-$n$-3-$\varepsilon$.
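A possible implementation of the three-parent SPX described above is sketched below. Sampling the combination weights from a flat Dirichlet distribution (uniform over the simplex) and the default expanding rate are implementation choices of this sketch, not prescriptions from the text.

```python
import numpy as np

# SPX-n-3-eps: expand the simplex of mu = 3 parents about its centroid O
# by the rate eps, then return a random convex combination of the expanded
# vertices. The default eps = 2 corresponds to sqrt(mu + 1) for mu = 3,
# a value often recommended in the SPX literature.

def spx(parents, eps=2.0, rng=np.random.default_rng()):
    parents = np.asarray(parents, dtype=float)    # shape (mu, n), here mu = 3
    center = parents.mean(axis=0)                 # centroid O of the simplex
    vertices = center + eps * (parents - center)  # expanded vertices y_i
    r = rng.dirichlet(np.ones(len(parents)))      # r_i >= 0, sum r_i = 1
    return r @ vertices                           # offspring c = sum r_i * y_i
```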

4. Memetic DE Based on Dynamic Preference for Solving COPs

4.1. Novel Fitness Function Based on Dynamic Preference

Pareto dominance is the most often used rule in multiobjective optimization to sort individuals. However, if two individuals do not dominate each other, the better one cannot be determined. To solve the converted biobjective problem (3), we intend to find the solution which not only satisfies the constraints ($G(x) = 0$) but also optimizes the original objective function $f(x)$; therefore, problem (3) is substantially a biobjective problem with preference. Among the methods for solving multiobjective problems, the weighted metrics method [28] scalarizes the multiple objectives into a single objective via the weighted distance from individuals to a reference point. One of the single objective functions obtained by scalarizing is called the achievement scalarizing function (ASF) [28], which expresses the preference for different objectives via different weighting vectors and reference points. For general biobjective optimization problems, the ASF is defined as follows:

$$\operatorname{ASF}(x; r, w) = \max_{i=1,2} \big\{ w_i \big( f_i(x) - r_i \big) \big\}, \tag{4}$$

where $r = (r_1, r_2)$ is a reference point (it may lie inside the objective space or not), which determines the preferred region on the Pareto front (the curve between A and B on the PF), and $w = (w_1, w_2)$, satisfying $w_1, w_2 \ge 0$ and $w_1 + w_2 = 1$, is a weighting vector, which points to a certain Pareto optimal point in the preferred region (see Figure 2). From Figure 2, $\operatorname{ASF}(x; r, w)$ is the $w$-weighted distance from individual $x$ to the reference point $r$, and the values $w_1$, $w_2$ express the extent of preference for the two objectives $f_1$ and $f_2$: $w_1 > w_2$ means that ASF (4) prefers the Pareto optimal solutions with small $f_1$ between A and B, and conversely it prefers the solutions with small $f_2$. The advantage of the ASF is that an arbitrary (weakly) Pareto optimal solution can be obtained by moving the reference point only. Moreover, the different (weakly) Pareto optimal solutions between A and B can be found by varying the weighting vector [28].

For the converted biobjective problem, the evolution procedure usually passes through three stages. In the first stage, there is no feasible solution in the population; therefore, the individuals with small constraint violations should be selected with priority to help the population approach the feasible region. As the population evolves, some individuals enter the feasible region. In this second stage, the feasible individuals are preferred, and some nondominated infeasible individuals with small constraint violations are also maintained to increase the diversity. In the third stage, there are only feasible individuals in the population, and the feasible individuals with small objective function values are preferred. According to the characteristics of the ASF and the above analysis, in the different stages of evolution, a proper reference point and weighting vector should be chosen adaptively to construct the fitness function and realize the preference towards the different objectives.

In the first stage, there are only infeasible solutions in the population; that is, $p_f = 0$, where $p_f$ denotes the proportion of feasible solutions in the population. Therefore, constraint satisfaction is more important than objective minimization in the comparison of individuals. However, if constraint satisfaction were the only condition considered, dominated individuals with small constraint violations but large objective function values would also enter the next population with priority, and such individuals have little effect on finding the optimal solution. On the other hand, individuals with smaller objective function values and slightly bigger constraint violations would be neglected. Furthermore, some algorithms adopt Pareto ranking, which puts the same importance on all objectives and assigns the same best fitness value to all Pareto optimal solutions in the population. But, for some problems (e.g., problems whose optimal solution is inside the feasible region), infeasible individuals which are far away from the boundary of the feasible region have little effect on searching for the optimal solution. Thus, in this stage, small constraint violation should be guaranteed with priority, while the objective function value should not be allowed to grow too large. In the absence of feasible solutions in the population, the nondominated infeasible solution with the smallest constraint violation (also called the best infeasible solution) is the best solution, and it is chosen as the reference point $r$. Moreover, in this stage, constraint satisfaction is the key issue; thus, the weighting vector is set to $w = (0.1, 0.9)$. This choice of the reference point and the weighting vector means that the smaller the weighted distance from an individual to the reference point, the better the individual. Figure 3(a) illustrates the fitness values of the individuals in the population.

In the second stage, there are both feasible and infeasible solutions in the population; that is, $0 < p_f < 1$. In this stage, the evolution should adjust the preference according to the proportion of feasible solutions in the current population. If the proportion is low, it is possible that the evolution is in an early phase of the search, or that the feasible region is very small compared to the search space and it is difficult to obtain feasible individuals; therefore, the feasible individuals are preferred. If the proportion is high, some nondominated infeasible individuals with small constraint violations are more useful than some feasible individuals with large objective function values, especially for problems whose optimal solution is located exactly on or near the boundary of the feasible region. From the above analysis, the reference point $r$ is set to the best feasible individual, and the weighting vector is determined according to the proportion $p_f$. If $p_f$ is small, the evolution should prefer the individuals with small values of $f_2$ (i.e., individuals which are feasible or have small constraint violations). Conversely, if $p_f$ is big, the evolution should prefer the individuals with small values of $f_1$. Because the original COP is to find the optimal solution satisfying all constraints, constraint satisfaction is primary, so the preference for the objective function cannot be too large, and the maximum preference weight on the objective function is set to 0.5; that is, $w_1 \le 0.5$. The weighting vector is thus determined as $w_1 = \min\{p_f, 0.5\}$ and $w_2 = 1 - w_1$. Figures 3(b) and 3(c) illustrate the preference in this stage.

In the third stage, there are only feasible solutions in the population; that is, $p_f = 1$. It is obvious that the comparison of individuals should be based on their objective function values. Actually, this criterion is a special case of formula (4), in which $r$ is the best feasible solution and the weighting vector is $w = (1, 0)$.

In the comparison among individuals, $\operatorname{ASF}(x; r, w)$ is used as the fitness function: the smaller the value of $\operatorname{ASF}(x; r, w)$, the better the individual $x$. The fitness is the weighted distance from individual $x$ to the reference point $r$, which is the best individual in the population. Obviously, the fitness is nonnegative and attains its minimum at the best individual; therefore, the elitist strategy is adopted to prevent the population from degenerating. To avoid biases in the fitness caused by the different magnitudes of $f_1$ and $f_2$, both of them are normalized first. For simplicity, we still use $f_1$ and $f_2$ to denote the normalized objectives:

$$f_1(x) \leftarrow \frac{f_1(x) - f_1^{\min}}{f_1^{\max} - f_1^{\min}}, \qquad f_2(x) \leftarrow \frac{f_2(x)}{f_2^{\max}},$$

where $f_1^{\max}$, $f_1^{\min}$, $f_2^{\max}$, and $f_2^{\min}$ are the maximum and minimum values of $f_1$ and $f_2$, respectively. The minimum of $f_2$ is known to be 0, while the other three values are unknown; therefore, $f_1^{\max}$, $f_1^{\min}$, and $f_2^{\max}$ are updated by the maximum and minimum values of $f_1$ and $f_2$ in the current population. Moreover, in the normalization we do not use the minimum value of $f_2$ in the population, because if there were no feasible solution, the infeasible solution with the smallest constraint violation would be normalized to a feasible one.
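To make the three stages concrete, the following sketch computes the ASF based fitness for a whole population. The stage-one and stage-three settings follow the text above; the stage-two weight rule $w_1 = \min\{p_f, 0.5\}$ is reconstructed from the stated 0.5 cap and may differ in detail from the authors' implementation.

```python
import numpy as np

# ASF fitness with stage-dependent reference point and weights.
# F1, F2: arrays of raw objective values f(x) and violations G(x).

def asf_fitness(F1, F2, w1_max=0.5):
    # normalize; the minimum of f2 = G is known to be 0
    f1 = (F1 - F1.min()) / (F1.max() - F1.min() + 1e-12)
    f2 = F2 / (F2.max() + 1e-12)
    p_f = np.mean(F2 == 0)                      # proportion of feasible solutions
    if p_f == 0:                                # stage 1: best infeasible as reference
        ref = np.array([f1[np.argmin(f2)], f2.min()])
        w = np.array([0.1, 0.9])
    elif p_f < 1:                               # stage 2: best feasible as reference
        feas = F2 == 0
        i = np.flatnonzero(feas)[np.argmin(f1[feas])]
        ref = np.array([f1[i], 0.0])
        w1 = min(p_f, w1_max)                   # reconstructed weight rule
        w = np.array([w1, 1.0 - w1])
    else:                                       # stage 3: objective-only comparison
        ref = np.array([f1.min(), 0.0])
        w = np.array([1.0, 0.0])
    # ASF (4): weighted Chebyshev distance to the reference point
    return np.maximum(w[0] * (f1 - ref[0]), w[1] * (f2 - ref[1]))
```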

4.2. Local Search Based on SPX

In this paper, DE/rand/1/bin is used as the global search engine; it has good exploration ability in the early phases and can explore the search space fully. However, its exploitation ability is weak, which results in slow convergence. To balance exploration and exploitation in the optimizing process, a local search (LS) based on simplex crossover (SPX) is employed. For many COPs in practice, especially those with equality constraints, the optimal solution is situated on or near the boundary of the feasible region. Thus searching from both sides of the boundary of the feasible region is effective. From Figure 1, one can conclude that the solutions in the lower-left part of the objective space, which are feasible solutions with small objective values or nondominated infeasible ones with small constraint violations, are promising. Thus, the neighborhood embraced by these promising solutions deserves more computing resources, and the LS will be efficient when focusing on this promising region. Besides the neighborhood the LS concentrates on, the other issues influencing the LS are which solutions the LS engine works on and how many solutions participate in the LS. From the conclusion in [27], SPX with a large number of parents has sampling biases that reflect biases in the population distribution too much, so a medium number of parents is a good choice to overcome the oversampling biases. Furthermore, [27] concluded that SPX works well with $\mu = 3$ on low dimensional functions and $\mu = 4$ on high dimensional functions. Based on this conclusion, in the proposed SPX based local search the number of parents $\mu$ is set to 3. The selection of the three parents performing SPX in the LS is illustrated in Figure 4.

In the first stage, there is no feasible solution in the population; that is, $p_f = 0$. The nondominated solutions with small constraint violations are located in the lower-left part of the objective space; therefore, the top three nondominated solutions with the smallest constraint violations are selected as parents $x_1$, $x_2$, and $x_3$ to perform SPX (Figure 4(a)).

In the second stage, there are feasible and infeasible solutions simultaneously in the population; that is, $0 < p_f < 1$. If there is only one feasible solution, it is selected as the first parent $x_1$. Then, the solutions which are nondominated with it are checked. If there is no such nondominated solution, the top two solutions with the smallest constraint violations are selected as the second and third parents $x_2$ and $x_3$ (Figure 4(b)); otherwise, the nondominated solution with the smallest constraint violation is selected as the second parent $x_2$, and the infeasible solution with the smallest constraint violation in the population excluding $x_2$ is selected as the third parent $x_3$ (Figure 4(c)). If there is more than one feasible solution, the top two feasible solutions with the smallest objective values are selected as parents $x_1$ and $x_2$. Moreover, if there is no solution which is nondominated with the best feasible solution, the infeasible solution with the smallest constraint violation is selected as the third parent $x_3$ (Figure 4(d)); otherwise, the nondominated one with the smallest constraint violation is the third parent $x_3$ (Figure 4(e)).

In the last stage, all solutions in the population are feasible; that is, $p_f = 1$. It is obvious that the top three solutions with the smallest objective values are selected as parents $x_1$, $x_2$, and $x_3$.

In the above LS procedure, the best feasible and infeasible solutions are selected to form a simplex which crosses the boundary between the feasible and infeasible regions of the search space. Furthermore, the neighborhood that the LS engine works on is dynamically adjusted during evolution so that the search focuses on the promising region of the search space, and the strategy encourages exploitation adaptively from both sides of the boundary of the feasible region; a condensed code sketch follows.
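The sketch below condenses this stage-dependent parent selection; the nondominance checks of Figures 4(b)–4(e) are simplified to pure violation and objective rankings for brevity.

```python
import numpy as np

# Select the three SPX parents from the population according to the
# stage-dependent rules described above (simplified).

def select_spx_parents(pop, F1, F2):
    feas = np.flatnonzero(F2 == 0)
    infeas = np.flatnonzero(F2 > 0)
    by_f = feas[np.argsort(F1[feas])]        # feasible, best objective first
    by_g = infeas[np.argsort(F2[infeas])]    # infeasible, smallest violation first
    if len(feas) == 0:                       # stage 1: three least-violating solutions
        idx = by_g[:3]
    elif len(feas) == 1:                     # stage 2, single feasible solution
        idx = np.concatenate(([by_f[0]], by_g[:2]))
    elif len(infeas) > 0:                    # stage 2, several feasible solutions
        idx = np.concatenate((by_f[:2], [by_g[0]]))
    else:                                    # stage 3: three best feasible solutions
        idx = by_f[:3]
    return pop[idx]
```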

4.3. Memetic DE Based on Dynamic Preference for COPs

For biobjective problem (3) with preference, the proposed EA, denoted by MDEDP, is described as follows.

Algorithm 1 (MDEDP).

Step 1 (initialization). Randomly generate the initial population $P_0$ with size NP in the search space $S$, and set the generation counter $t = 0$.

Step 2 (global search). Apply DE/rand/1/bin as the global search engine to evolve the population $P_t$; denote the offspring set by $Q_t$.

Step 3 (local search). Perform the local search on the promising region according to the details in Section 4.2. The LS based on SPX generates a preset number of offspring, which are randomly sampled from the expanded simplex; these offspring constitute the set $L_t$.

Step 4 (selection). Select the next population $P_{t+1}$ from $P_t \cup Q_t \cup L_t$ according to the fitness function defined in Section 4.1.

Step 5 (stopping criterion). If the stopping criterion is met (usually a maximum number of function evaluations), stop; else let $t = t + 1$ and go to Step 2.
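Assuming the helpers sketched earlier (violation, spx, select_spx_parents, and asf_fitness) are in scope, Algorithm 1 can be wired together roughly as follows. NS and the fixed F = 0.5 and CR = 0.9 are illustrative; the paper samples F and CR randomly and budgets function evaluations rather than generations.

```python
import numpy as np

def mdedp(obj, gs, hs, lo, hi, NP=60, NS=5, max_gen=2000,
          rng=np.random.default_rng()):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    n = len(lo)
    pop = rng.uniform(lo, hi, size=(NP, n))              # Step 1: initialization
    for _ in range(max_gen):                             # Step 5: stopping criterion
        # Step 2 (global search): one DE/rand/1/bin offspring per target vector
        Q = np.empty_like(pop)
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3,
                                    replace=False)
            v = pop[r1] + 0.5 * (pop[r2] - pop[r3])
            mask = rng.random(n) <= 0.9
            mask[rng.integers(n)] = True                 # keep one mutant component
            Q[i] = np.clip(np.where(mask, v, pop[i]), lo, hi)
        # Step 3 (local search): SPX offspring around the promising region
        F1 = np.array([obj(x) for x in pop])
        F2 = np.array([violation(x, gs, hs) for x in pop])
        parents = select_spx_parents(pop, F1, F2)
        L = np.array([np.clip(spx(parents, rng=rng), lo, hi) for _ in range(NS)])
        # Step 4 (selection): keep the NP best of the union by ASF fitness
        union = np.vstack((pop, Q, L))
        U1 = np.array([obj(x) for x in union])
        U2 = np.array([violation(x, gs, hs) for x in union])
        pop = union[np.argsort(asf_fitness(U1, U2))[:NP]]
    return pop
```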

5. Simulation Results

5.1. Test Functions and Parameters Setting

In this section, we apply the proposed MDEDP to 22 standard test functions from [1, 2, 24]. The parameters used in the simulations are given below. The population size is NP; the scale factor $F$ in the DE mutation and the crossover probability $CR$ are taken as uniform random numbers in preset intervals. In the SPX based LS, the expanding ratio is $\varepsilon$, and a fixed number of offspring are generated per local search. The maximum number of function evaluations (FEs) is set to 240000. In the constraint handling technique, the equality constraints are converted into inequality constraints with a tolerance value $\delta$. In the experiments, the tolerance value is dynamically decreased as proposed in [29] and used in [9, 12]:

$$\delta_{t+1} = \frac{\delta_t}{dec},$$

where $dec > 1$ is a proportional decreasing factor. The initial value $\delta_0$ is set to 3, and the decreasing factor is the same as that in [9]. For each test function, 30 independent trials are executed, and the statistical results of the 30 trials are recorded. The statistical results include the "best," "median," "mean," and "worst" of the obtained optimal solutions and the standard deviations (Std.).

5.2. Experimental Results and Comparison

The experimental results are compared with the state-of-the-art algorithms Var2 [24], SAMSDE-2 [24], SR [2], ATMES [9], and CBBO-DM [25]. Among the compared algorithms, Var2 [24] uses DE/rand/3/bin as the search engine and the feasibility rules [3] as the comparison criterion; SAMSDE-2 adaptively combines two DE mutation schemes as the search engine and also uses the feasibility rules [3] as the comparison criterion; SR employs ES as the search engine and stochastic ranking as the comparison criterion; ATMES uses ES as the search engine and an adaptive tradeoff fitness as the selection operator; and CBBO-DM combines BBO with DE mutation operators. It is mentioned here that our MDEDP, ATMES, Var2 [24], and SAMSDE-2 [24] used 240000 FEs, while SR and CBBO-DM used 350000 FEs. The tolerance for equality constraints is a fixed small value for Var2 [24], SAMSDE-2 [24], and CBBO-DM, while for MDEDP and ATMES the tolerance is dynamically decreased from an initial value of 3 to a small final value.

Table 1 gives the results of six algorithms, our MDEDP, Var2 [24], SAMSDE-2 [24], SR [2], ATMES [9], and CBBO-DM [25] for the first 13 test instances since SR, ATMES, and CBBO-DM solved only the first 13 problems. The results and comparisons with Var2 [24] and SAMSDE-2 [24] for the remaining nine test instances are given in Table 2.

In order to test for significant differences, approximate two-sample t-tests between the compared algorithms and our MDEDP have been performed according to [18]:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2 / n_1 + s_2^2 / n_2}},$$

where $\bar{x}_1$ and $\bar{x}_2$ denote the mean values, $s_1$ and $s_2$ are the standard deviations of the results obtained by the two algorithms, and $n_1$ and $n_2$ are the numbers of independent runs of the two algorithms, respectively. The number of degrees of freedom is calculated as follows:

$$\nu = \frac{\left( s_1^2 / n_1 + s_2^2 / n_2 \right)^2}{\dfrac{\left( s_1^2 / n_1 \right)^2}{n_1 - 1} + \dfrac{\left( s_2^2 / n_2 \right)^2}{n_2 - 1}}.$$
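These two formulas amount to Welch's approximate t-test and can be computed directly from the tabulated statistics, as in the following sketch; with $n_1 = n_2 = 30$ runs, the statistic is then compared against the critical value of the t distribution at the chosen significance level.

```python
import math

# Welch's approximate two-sample t-test from summary statistics:
# means m1, m2; standard deviations s1, s2; run counts n1, n2.

def welch_t(m1, s1, n1, m2, s2, n2):
    a, b = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(a + b)                             # t statistic
    df = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))  # degrees of freedom
    return t, df
```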

For the first 13 test instances, the t-tests are done between our MDEDP and the three algorithms SR, ATMES, and CBBO-DM on 6 instances (g02, g05, g07, g09, g10, and g13). The t-test is not done on the other instances since the optimal solutions are found by at least three algorithms in all runs. The data used for the t-tests are taken from the related references, and the computed results are shown in Table 3. In the table, "NA" means no data are available in the related references, "−" means both approaches obtained the optimal solutions in all runs on the given function, and "NF" means the algorithm found no feasible solutions. Superscript "a" means that the difference between MDEDP and the compared algorithm, with the corresponding degrees of freedom, is significant at the 0.05 level and MDEDP is better than the compared algorithm, while superscript "b" means that MDEDP is worse than the compared algorithm.

As Table 3 shows, most of the differences are significant, except the difference between ATMES and MDEDP on problem g13, which illustrates that our MDEDP is competitive with the other algorithms on the most widely used 13 benchmark problems.

For all 22 test instances, the t-tests are done between our MDEDP and Var2, SAMSDE-2 on 12 instances (g02, g05, g07, g09, g10, g13, g14, g17, g18, g19, g21, and g23). The t-test is not done on the other instances since the optimal solutions are found by at least two algorithms in all runs. The results of the t-tests are shown in Table 4, in which it is clear that most of the differences are significant except the differences between either of the compared algorithms and MDEDP on problem g05. Therefore, it is concluded that our MDEDP is competitive with the two state-of-the-art algorithms on the 22 test instances.

In summary, compared with other five algorithms for constrained optimization problems, MDEDP is very competitive.

In the proposed MDEDP, the SPX based local search is incorporated to speed up convergence. To test its efficiency, comparisons are made on the most widely used 13 test instances between the proposed algorithm with local search and without local search (the variant without local search is denoted by DEDP). The parameter settings are the same as for MDEDP except that the SPX based local search is discarded. Note that DEDP also performed 240000 FEs, the same as MDEDP. The results for DEDP are shown in the last column of Table 1. From Table 1, it is clear that DEDP can find nine optimal solutions (g03, g04, g05, g06, g08, g09, g11, g12, and g13) among the 13 test instances, which shows that the DE search engine is powerful and that the proposed fitness function based on the ASF achieves a good balance between the objective function and the constraint violation. However, DEDP performs poorly on the other four test instances, which involve difficulties such as high dimensionality and many local optima. Compared with DEDP, MDEDP performs clearly better, which illustrates that the SPX based local search can markedly improve the efficiency of DEDP.

5.3. Effect of the Weighting Vector

In this subsection, the effect of the weighting vector which reflects the preference is discussed through various experiments on the most widely used 13 test instances.

In stage one, that is, $p_f = 0$, there is no feasible solution in the population, and the weighting vector is set to $w = (0.1, 0.9)$. The big second component of $w$ indicates that, in this stage, the search is biased much more towards constraint satisfaction. Meanwhile, the fact that $w_1$ is not set to zero means that the objective function should not be ignored completely.

In further experiments, the weighting vector is set to (0.2, 0.8), (0.3, 0.7), (0.4, 0.6), and (0.5, 0.5), and the results are shown in Table 5. The results are reported for only 10 of the most widely used 13 test functions, excluding g02, g04, and g12, since the feasible regions of g02, g04, and g12 are relatively large, which means the populations seldom or never experience stage one, and the value of $w$ hardly affects the evolution. For convenient comparison, the results with $w = (0.1, 0.9)$ are also shown in Table 5. From Table 5, it can be seen that, with the four different weighting vectors, the proposed algorithm converges consistently to the global optimum in 30 independent runs for the 10 test functions, all of which experience stage one at least once. Therefore, the proposed algorithm is not sensitive to the preset weighting vector in stage one. The results and the above analysis demonstrate that MDEDP is robust.

In stage two, that is, $0 < p_f < 1$, there are feasible and infeasible solutions in the population at the same time. As the evolution proceeds, the feasible proportion increases accordingly. In this stage, the parameter $w_1^{\max}$, the upper bound of the first component of $w$, is used to limit the bias towards the objective function during evolution. In the numerical experiments, $w_1^{\max} = 0.5$. This upper bound means that the preference for the objective function is at most equal to that for constraint satisfaction, which is the primary goal. Further experiments are performed on all test functions with different upper bound values: 0.4, 0.45, 0.6, and 0.7. The upper bounds 0.4 and 0.45 represent a preference for the objective function slightly less than that for constraint satisfaction when there are more feasible solutions in the population, while 0.6 and 0.7 represent a slightly greater preference for the objective function. The experimental results are shown in Table 6. It can be seen from Table 6 that MDEDP obtains similar results with the different upper bound values. From the final results on g02, we can see that the algorithm is most robust with the upper bound 0.7. This is because the whole search space of g02 is almost feasible, so a search biased towards the objective function is helpful for finding the optimal solution and coincides with a larger value of the first component of the weighting vector. The obtained results indicate that the different upper bounds have little effect on the efficiency and robustness of the proposed algorithm.

From the above experimental results, we can conclude that the proposed MDEDP robustly finds the optimal solutions of the test functions under the different preset parameters.

6. Conclusion

The constrained optimization problem is first converted into a biobjective optimization problem and then solved by a newly designed memetic differential evolution algorithm with dynamic preference. DE is used as the search engine in the global search, which is guided by a novel fitness function based on the ASF used in multiobjective optimization. The ASF based fitness function dynamically adjusts the preference towards the different objectives through an adaptive reference point and weighting vector. In the local search, SPX is used as the search mechanism and concentrates on the neighborhood constituted by the best feasible and infeasible solutions, which makes the algorithm approach the promising region in the objective space and accelerates convergence. Experimental results and comparisons with state-of-the-art algorithms demonstrate that the proposed algorithm is efficient on the standard test problems. Further experiments with different preset parameters also show the robustness of the proposed algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (no. 61272119).