Review Article  Open Access
A Review of Constraint-Handling Techniques for Evolution Strategies
Abstract
Evolution strategies are successful global optimization methods. In many practical numerical problems the constraints are not explicitly given, and evolution strategies have to incorporate techniques to optimize in restricted solution spaces. Well-known constraint-handling techniques are penalty and multiobjective approaches. Past work has shown that, in particular, an ill-conditioned alignment between the coordinate system of Gaussian mutation and the constraint boundaries leads to premature convergence. Covariance matrix adaptation evolution strategies offer a solution to this alignment problem. Finally, metamodeling of the constraint boundary leads to significant savings of constraint function calls and to a speedup by repairing infeasible solutions. This work gives a brief overview of constraint-handling methods for evolution strategies by demonstrating the approaches experimentally on two exemplary constrained problems.
1. Introduction
Many continuous optimization problems in practical applications are subject to constraints [1]. Constraints can make an easy problem hard and hard problems even harder. Surprisingly, in the past only little research effort has been devoted to the development of efficient and effective constraint-handling techniques—in contrast to the energy invested in the development of new methods for unconstrained optimization. This observation also holds true in the field of evolutionary computation. This paper is devoted to constraint-handling techniques that have been developed, in particular, for evolution strategies. It summarizes our line of research of the last years in the field of constraint handling and premature step-size reduction [2–7]. In real-valued solution spaces a constrained problem can be hard to solve due to a coordinate system alignment problem that leads to premature fitness stagnation. We review not only various general approaches like penalty functions, but also specialized approaches that have been developed to solve coordinate alignment problems, by summarizing each constraint-handling method, stating experimental results on two exemplary test functions, and discussing advantages and disadvantages.
The remainder of this section gives a brief introduction to evolution strategies, constrained problems, and a taxonomy of constraint-handling techniques. Section 2 introduces three examples from the famous family of penalty functions. A bio-inspired multiobjective approach is reviewed in Section 3. The methods that concentrate on coordinate system alignment are presented in Section 4, while Section 5 is devoted to metamodeling of the constraint boundary.
1.1. Evolution Strategies
Evolution strategies (ES) are a family of strong stochastic methods for global optimization. Developed by Rechenberg [8] and Schwefel [9], they have become famous for global numerical optimization, that is, nonconvex optimization in ℝ^N. In each iteration λ offspring solutions are produced and the μ best are selected as parents for the following generation. An important basis of ES is the self-adaptive Gaussian mutation operator that we briefly repeat in this context. An individual of a (μ, λ)-ES with the N-dimensional objective variable vector x = (x_1, …, x_N) is mutated in the following way:

x'_i = x_i + σ_i · N_i(0,1), i = 1, …, N,

where N_i(0,1) delivers a Gaussian distributed number. The strategy parameter vector σ = (σ_1, …, σ_N) undergoes mutation—a typical variation of σ—with log-normal mutation:

σ'_i = σ_i · exp(τ' · N(0,1) + τ · N_i(0,1)).
As crossover operator, arithmetic recombination is applied in most cases. For a detailed introduction to ES we recommend the introduction by Beyer and Schwefel [10] or the introductory chapter on ES in the book by Eiben and Smith [11].
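The self-adaptive mutation described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the learning rates tau0 and tau follow the common 1/sqrt(2N) and 1/sqrt(2·sqrt(N)) recommendations, which is an assumption here):

```python
import numpy as np

def mutate(x, sigma, rng):
    """Self-adaptive Gaussian mutation of an ES individual.

    The step sizes undergo log-normal mutation, and the objective
    variables are perturbed with Gaussian noise scaled by them.
    """
    n = len(x)
    tau0 = 1.0 / np.sqrt(2.0 * n)          # global learning rate
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))  # coordinate-wise learning rate
    # log-normal mutation of the strategy parameters
    sigma_new = sigma * np.exp(tau0 * rng.normal() + tau * rng.normal(size=n))
    # Gaussian mutation of the objective variables
    x_new = x + sigma_new * rng.normal(size=n)
    return x_new, sigma_new
```

Since the step sizes are multiplied by a log-normally distributed factor, they always stay positive.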
1.2. Constrained Problems
In the field of evolutionary computation the constraints are typically not considered available in their explicit formal form. Rather, the constraints are assumed to be black boxes: a vector fed to the black box just returns a numerical or boolean value. If there is a numerical response, then the information carried by a positive value can be used to assess the distance to feasible solutions. A number of constraint-handling methods exploit this information. In general, the constrained continuous nonlinear programming problem is defined as follows: find a solution x in the N-dimensional solution space ℝ^N that minimizes the objective function f, in symbols:

minimize f(x), x ∈ ℝ^N, subject to g_i(x) ≤ 0, i = 1, …, n_1, and h_j(x) = 0, j = 1, …, n_2.
A feasible solution satisfies all inequality and equality constraints. A feasible solution x* that minimizes f is termed an optimal solution. If g_i(x*) = 0 for some inequality constraint at an optimal solution x*, then this constraint is said to be active. We assume that the evaluations of the constraint functions are computationally expensive and that the return values are boolean, providing the information of whether the solution is feasible or not. In order to be able to develop more advanced constraint-handling techniques, for example, repair or feasibility check approaches, metamodels of the constraint function can be built under certain assumptions, that is, to be linear, quadratic, and so forth.
The two following test functions excellently demonstrate the phenomenon of premature fitness stagnation that will be discussed in the following sections and that is a challenge for most constraint-handling techniques. The two functions will be used for the discussion of the methods reviewed in the current paper. Problem 2.40—taken from Schwefel's artificial test problems [10]—exhibits a linear objective function and an optimum with five active linear constraints. The problem is to minimize

f(x) = −(x_1 + x_2 + x_3 + x_4 + x_5)

subject to

x_j ≥ 0, j = 1, …, 5, and Σ_{i=1}^{5} (9 + i) · x_i ≤ 50000,

with minimum x* = (5000, 0, 0, 0, 0) and f(x*) = −5000.
The second problem is called tangent problem (TR). It is based on the sphere model subject to one linear constraint:

minimize f(x) = Σ_{i=1}^{N} x_i^2 subject to g(x) = Σ_{i=1}^{N} x_i − t > 0,

with t = N and optimum x* = (1, …, 1). The success rates on TR get worse when approximating the optimum. In this paper we will focus on the TR problem with N = 2 dimensions, denoted as TR2.
1.3. A Brief Taxonomy of ConstraintHandling Methods
A variety of constrainthandling methods for evolutionary algorithms have been developed in the last decades. Most of them can be classified into five main types of concepts.
(1) Penalty functions decrease the fitness of infeasible solutions by taking the number of violated constraints or the distance to feasibility into account [12–16]. The history of penalty functions began with the sequential unconstrained minimization technique by Fiacco and McCormick [13], in which the constrained problem is solved by a sequence of unconstrained optimizations. The penalty factors are stepwise intensified. In similar approaches penalty factors can be defined statically [14] or depending on the number of satisfied constraints [16]. They can dynamically depend on the number of generations as Joines and Houck propose [15]: at generation t the fitness becomes f(x) + (C · t)^α · G(x); the parameters C and α are user defined; typical settings are C = 0.5 and α = 1 or α = 2. G is a measure for the constraint violation. A frequent definition is G(x) = Σ_i max(0, g_i(x))^β + Σ_j |h_j(x)|^γ with factors β and γ. Penalties can be adapted according to an external cooling scheme [15] or by adaptive heuristics [12]. In the death penalty approach [5] infeasible solutions are rejected and new solutions are created until enough feasible ones exist. In the segregated genetic algorithm by Riche et al. [17] two penalty functions, a weak one and an intense one, are calculated in order to surround the optimum. In the coevolutionary penalty function approach by Coello Coello [18] the penalty factors of an inner evolutionary algorithm are adapted by an outer evolutionary algorithm. Some methods are based on the assumption that any feasible solution is better than any infeasible one [19, 20]. Examples are the metric penalty functions by Hoffmeister and Sprave [21]: feasible solutions are compared using the objective function, while infeasible solutions are compared considering the satisfaction of constraints. Similar is the approach by Oyman et al. [22]. Their fitness function depends on the parent and children populations at every generation and, therefore, becomes a dynamic approach.
In his stochastic ranking approach, Runarsson [23] uses metamodels to predict both fitness function values and penalties based on constraint violations. From this point of view the approach is related to methods that are based on metamodeling the constraint boundary.
(2) Repair algorithms either replace infeasible solutions or only use the repaired solutions for the evaluation of their infeasible pendants [24, 25]. This class of algorithms can also be seen as local search methods that reduce the constraint violation: the repair algorithm generates a feasible solution from an infeasible one. In the Baldwinian case, the fitness of the repaired solution replaces the fitness of the original solution. In the Lamarckian case, the feasible solution overwrites the infeasible one. In general, defining a repair algorithm can be as complex as solving the problem itself.
(3) Decoder functions map genotypes to phenotypes which are guaranteed to be feasible. Decoders build up a relationship between the constrained solution space and an artificial solution space that is easier to handle [25–27]. They map a genotype into a feasible phenotype; by this means even quite different genotypes may be mapped onto the same phenotype. Eiben and Smith [11] define decoders as a class of mappings from the genotype space to the feasible regions F of the solution space with the following properties: every genotype must map to a single solution s ∈ F, every solution s ∈ F must have at least one representation in the genotype space, and every s ∈ F must have the same number of representations (this need not be one).
(4) Feasibility preserving representations and operators force candidate solutions to be feasible [28, 29]. A famous example is the GENOCOP algorithm [27] that reduces the problem to convex search spaces and linear constraints. A predator-prey approach to handle constraints is proposed by Paredis [28] using two separate populations. Schoenauer and Michalewicz [29] propose special operators that are designed to search regions in the vicinity of active constraints. A comprehensive overview of decoder-based constraint-handling techniques is given by Coello [25] and also by Michalewicz and Fogel [27].
(5) Multiobjective optimization techniques are based on the idea of handling each constraint as an objective [30–35]. Under this assumption many multiobjective optimization methods can be applied. Such approaches were used by Parmee and Purchase [34], Jimenez and Verdegay [32], Coello Coello [31], and Surry et al. [36]. In the behavioral memory method by Schoenauer and Xanthakis [35] the EA concentrates on minimizing the constraint violation of each constraint in a certain order and optimizes the objective function in the last step.
Of course, constraint-handling methods exist that do not fit into this taxonomy. Montes and Coello Coello [37] introduced a technique based on a multimembered ES with a feasibility comparison mechanism. The constrained differential evolution approach by Takahama and Sakai [38] combines the usage of an ε-tolerance for equality constraints with differential evolution. The dynamic multi-swarm particle swarm optimizer by Liang and Suganthan [39] makes use of a set of subswarms concentrating on different constraints; it is combined with sequential quadratic programming as a local search method. The approach of Mezura-Montes et al. [40] combines differential evolution, different mutation operators to increase the probability of producing better offspring, three selection criteria, and a diversity mechanism. Mezura-Montes [41] gives a survey of constraint-handling methods for evolutionary algorithms.
In the following sections we will review various approaches from different fields and compare them, in particular with regard to the mentioned premature step-size problem. The next section shows this problem experimentally.
2. Penalty Methods
Evolutionary search is guided by the quality of its candidate solutions. Consequently, an obvious approach to constraint handling is to deteriorate the fitness of infeasible solutions [11, 25]. Here we review three penalty functions exemplarily. Death penalty is the simplest way, but wastes comparably many constraint function calls. Section 2.2 presents a typical dynamic penalty technique where the solutions are penalized with regard to the progress of the search. The death penalty step control approach that prevents premature step-size reduction is reviewed in Section 2.3.
2.1. Death Penalty
First of all, we will analyze the behavior of death penalty, that is, simply discarding infeasible offspring solutions [42, 43]. Here we can observe premature fitness stagnation for the first time. Table 1 shows the corresponding results of a (15,100)-ES with the following settings on problems 2.40 and TR2. We use the self-adaptive mutation introduced in Section 1.1 and arithmetic recombination with randomly chosen parents. All experiments in this article make use of the same experimental settings unless stated explicitly. The termination condition is fitness stagnation: the algorithms terminate if the fitness gain from generation to generation falls below a small threshold. In this case the magnitude of the step sizes is too small to effect further improvements. Parameters best, mean, and dev describe the achieved fitness (difference between the optimal fitness and the fitness of the best solution) of 25 experimental runs, while ffe counts the average number of fitness function evaluations and cfe the average number of constraint function evaluations, respectively. The results show that death penalty is not able to approximate the optimum of the problems satisfactorily. The relatively high standard deviations dev show that the algorithm produces unsatisfactorily different results.

We can summarize the behavior of death penalty as follows: the advantage is that death penalty is easy to implement. The disadvantages are that death penalty is inefficient, as many infeasible tries are wasted, and that it suffers from premature convergence. The following methods aim at preventing premature convergence.
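Death penalty can be sketched as a simple resampling loop around any mutation operator (an illustrative sketch; `feasible` stands in for the black-box constraint check, and the isotropic Gaussian mutation is a simplification):

```python
import numpy as np

def death_penalty_offspring(parent, sigma, feasible, rng, max_tries=1000):
    """Resample until a feasible offspring is produced (death penalty).

    Every rejected sample costs one constraint function evaluation,
    which is what makes the method expensive near active constraints.
    """
    cfe = 0  # constraint function evaluations
    for _ in range(max_tries):
        child = parent + sigma * rng.normal(size=len(parent))
        cfe += 1
        if feasible(child):
            return child, cfe
    raise RuntimeError("no feasible offspring found")
```

The counter `cfe` makes the main drawback visible: near an active constraint a large fraction of samples is rejected before a feasible offspring appears.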
2.2. Dynamic Penalty Functions
The question arises whether dynamic penalty functions also suffer from premature convergence. To answer this question we tested the penalty function by Joines and Houck [15] that is based on adding a penalty to the fitness of infeasible solutions:

f(x) + (C · t)^α · G(x)

with parameters C, α, and the constraint violation measure G(x). The penalty depends on the number of iterations t and increases in the course of time. Table 2 shows the experimental analysis of the penalty function on 2.40 and TR2. Again, the algorithm is based on a (15,100)-ES with the same settings as in Section 2.1. The algorithm stops earlier, but the results are even worse and show that premature fitness stagnation occurs, too. The reason is quite obvious: the success rate in the vicinity of the infeasible search space remains small because of the penalty—no matter whether caused by discarding or penalizing. Consequently, we can summarize as follows: dynamic penalty functions are easy to implement, and no feasible starting point is required. But dynamic penalty functions still suffer from premature convergence. Related work on penalty functions can be found in [12–16].
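The dynamic penalty can be sketched as follows (a sketch under the assumption of inequality constraints g_i(x) ≤ 0; the default values of C, alpha, and beta mirror the typical settings mentioned above):

```python
def penalized_fitness(f, constraints, x, t, C=0.5, alpha=2.0, beta=2.0):
    """Joines/Houck-style dynamic penalty: f(x) + (C*t)^alpha * G(x).

    G sums the violations of all inequality constraints g_i(x) <= 0;
    the penalty term grows with the generation counter t.
    """
    G = sum(max(0.0, g(x)) ** beta for g in constraints)
    return f(x) + (C * t) ** alpha * G
```

For a feasible point G vanishes and the penalized fitness equals f(x); for an infeasible point the penalty grows quadratically with the generation counter.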

2.3. Death Penalty Step Control
The most obvious modification to prevent premature step-size reduction is the introduction of a minimum step size ε > 0 for the mutation strengths σ_i:

σ'_i := max(σ_i, ε).

This is exactly what the death penalty step control evolution strategy (DSES) is aiming at [5]. Nevertheless, a lower bound on the step sizes will also prevent an unlimited approximation of the optimum when reaching the range of ε. Consequently, the DSES makes use of a control mechanism to reduce ε during convergence to the optimal solution. The reduction process depends on the number of infeasible mutations produced when reaching the area of the optimum at the boundary of the feasible solution space: after every ϑ-th rejected infeasible solution, ε is reduced by a factor 0 < θ < 1 according to the equation

ε := θ · ε.

The DSES is denoted by [ϑ; θ]-DSES. Again, we show the behavior of this constraint-handling method on problems TR2 and 2.40; see Table 3. The method is able to approximate the optimum of problem 2.40 with comparably few fitness function evaluations, but with a waste of constraint function evaluations. Intuitively, the five active linear constraints of problem 2.40 cause many infeasible samples, as does the step-size reduction mechanism. On harder problems like TR2 the low success rates still prevent an arbitrarily exact approximation of the optimal solution. The success of the DSES depends on a proper reduction speed, that is, proper parameter settings for ϑ and θ. Too fast a reduction results in premature convergence; too slow a reduction is inefficient. Further experiments on other test functions confirm this picture.

Again, we summarize the results: death penalty step control is easy to implement and shows an improvement in the approximation of optima with active constraints. But death penalty step control consumes many constraint function evaluations, its success depends on proper parameter settings, and on some problems it may still suffer from low success rates. A more detailed experimental analysis of the DSES can be found in [4, 5].
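The step control of the DSES can be sketched as follows (a hedged sketch: the parameter names `theta` for the reduction factor and `vartheta` for the number of rejections between reductions are assumptions mirroring the two-parameter notation, not the authors' exact code):

```python
class StepControl:
    """Lower-bounds the step sizes and shrinks the bound on rejections.

    Every `vartheta`-th rejected infeasible trial the minimum step size
    `eps` is multiplied by `theta` (0 < theta < 1), so the bound
    vanishes while the search closes in on the constraint boundary.
    """

    def __init__(self, eps=1.0, theta=0.5, vartheta=10):
        self.eps, self.theta, self.vartheta = eps, theta, vartheta
        self.rejections = 0

    def bound(self, sigma):
        # enforce the minimum step size on every component
        return [max(s, self.eps) for s in sigma]

    def notify_infeasible(self):
        self.rejections += 1
        if self.rejections % self.vartheta == 0:
            self.eps *= self.theta
```

The trade-off discussed above lives in these two parameters: a small `theta` or `vartheta` shrinks the bound quickly (risking premature convergence), a large one shrinks it slowly (costing evaluations).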
3. A Multiobjective Bioinspired Approach
A familiar variant to handle constraints is to treat each constraint—or an aggregated sum of all constraints—and the objective function as separate objectives in a multiobjective formulation. Similar approaches have been introduced in the past [30–35]. Here we review a constraint-handling technique that treats the fulfillment of constraints and the optimization of the objective function as separate objectives that are optimized using a population-specific selection scheme. The bio-inspired concept offers an answer to the problem of low success rates: our two-sex evolution strategy (Kramer and Schwefel [5]) allows candidate solutions to cross the constraint boundary. The mechanism to enforce the approach of the optimum stems from nature: individuals of different sex are selected by different criteria, and nature allows pairing only between individuals of different sex. Transferring this principle to constraint handling means: every individual of the two-sex evolution strategy (TSES) is assigned a feature called sex. Similar to nature, individuals with different sexes are selected according to different criteria: individuals of the one sex are selected by the objective function, individuals of the other sex by the fulfillment of constraints. The intermediary recombination operator plays a key role, and recombination is only allowed between parents of different sex. A few modifications are necessary to prevent an explosion of the step sizes, that is, a two-step selection operator for the individuals selected by the objective function, similar to the operator by Hoffmeister and Sprave [21]. For a list of TSES variants and modifications we refer to [5]. The populations are denoted with indices that determine the sex, that is, one index for selection by the objective function and one for selection by the constraints.
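The split selection of the TSES can be sketched like this (a strongly simplified sketch: `violation` is an assumed aggregate of constraint violations, and the population sizes, pairing, and step-size safeguards of the actual TSES are omitted):

```python
def select_parents(population, f, violation, mu):
    """TSES-style split selection: one sex is selected by objective
    function value, the other by the fulfillment of constraints.
    Recombination would then pair parents of different sex only."""
    by_fitness = sorted(population, key=f)[:mu]           # sex selected by f
    by_feasibility = sorted(population, key=violation)[:mu]  # sex selected by constraints
    return by_fitness, by_feasibility
```

Because the fitness-selected sex may be infeasible, candidate solutions can cross the constraint boundary, while the constraint-selected sex pulls the offspring back toward feasibility.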
Table 4 shows the experimental results of the TSES on problems TR2 and 2.40. While death penalty completely fails on problem 2.40, the (8 8, 13 87)-TSES reaches the optimum in every run. Also a better approximation of the harder problem TR2 is now possible. Nevertheless, the approximation quality may still be improved, and an analysis on further test problems—which can be found in [4]—shows that the TSES is successful on many constrained problems, but not on all. Fortunately, the TSES is quite robust to the chosen population ratios.

We can summarize that the two-sex evolution strategy improves the approximation of optima with active constraints, allows infeasible starting points, saves constraint function evaluations, for example, in comparison to the DSES, and is quite robust to parameter changes. But the two-sex evolution strategy still consumes many fitness function evaluations, and on some problems it may still suffer from low success rates, for example, on TR2.
4. Coordinate Alignment Techniques
In real-valued optimization the coordinate system plays an important role. If the coordinate system of the mutation operator, for example, of Gaussian mutation, is not aligned with the coordinate system of the objective function—and this is frequently the case in black-box optimization—undesirable effects like premature step-size reduction may occur.
4.1. Premature StepSize Reduction
The phenomenon of premature step-size reduction at the constraint boundary has been analyzed in [2]—in particular for the condition that the optimum lies on the constraint boundary or even in a vertex of the feasible search space. In such cases the evolutionary algorithm frequently suffers from low success probabilities near the constraint boundaries. Under simple conditions, that is, a linear objective function, linear constraints, and a comparably simple mutation operator, the occurrence of premature convergence due to a premature decrease of step sizes was proven. Figure 1 illustrates the reason for premature step-size reduction. We assume the simplified case in which mutations are produced on the boundary of the circles. In case of large mutation strengths the region of success, that is, the marked part of the circle, is smaller relative to the whole circle than in the case of small mutation strengths, where the circle is not cut by the constraint boundary. Consequently, the probability to produce successful mutations is higher for small step sizes, and these mutations are favored during optimization. This is a coordinate system alignment problem: in case of independent step sizes and coordinate rotation the mutation circle can adapt to a mutation ellipsoid whose region of success is not restricted by the constraint boundary.
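The geometric argument can be reproduced numerically (a small Monte Carlo sketch that is not taken from the paper: a linear objective f(x) = x_2 to be minimized, the constraint x_1 ≥ 0, and a parent at distance d from the boundary are assumed as the simplest instance of the setting described above):

```python
import numpy as np

def success_rate(sigma, d=0.01, samples=100_000, seed=1):
    """Fraction of Gaussian mutations that are feasible (x_1 >= 0)
    and improve the linear objective f(x) = x_2 (i.e. step z_2 < 0),
    for a parent at distance d from the constraint boundary."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(samples, 2))
    feasible = d + sigma * z[:, 0] >= 0.0
    improving = z[:, 1] < 0.0
    return np.mean(feasible & improving)
```

Small steps succeed in roughly half of all trials, while large steps lose almost half of their successes to the constraint; selection therefore systematically favors small step sizes, which is exactly the premature reduction described above.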
Arnold and Brauer [44] analyzed the behavior at the boundary of a linear constraint and model the distance between the search point and the constraint plane with a Markov chain. Furthermore, they discuss the working of step length adaptation mechanisms based on success probabilities.
4.2. Biased Mutation
The shape of the standard mutation distribution is Gaussian. The best modification to improve the success rate situation would be a more flexible mutation distribution function. Later, we will see that a rotation of the mutation ellipsoid is a reasonable undertaking. But is a deformation also an adequate solution to low success rates? Biased mutation aims at shifting the mean of the Gaussian distribution into beneficial directions self-adaptively [7]. A self-adaptive bias coefficient vector ξ = (ξ_1, …, ξ_N) determines the direction of this bias and augments the degrees of freedom of the mutation operator. This additional degree of freedom improves the success rate of reproducing superior offspring. The mutation operator adapts the bias direction within the interval between −1 (for left) and 1 (for right) in each of the N dimensions:

ξ_i ∈ [−1, 1], i = 1, …, N.
This relative direction must be translated into an absolute bias vector b. For this sake the step sizes can be used: for every i ∈ {1, …, N} the bias vector is defined by

b_i = ξ_i · σ_i.
Since the absolute value of each bias coefficient is less than or equal to 1, the bias is bound to the step sizes σ_i. This restriction prevents the search from being biased too far away from the parent. Hence, the biased mutation works as follows:

x'_i = x_i + σ_i · N_i(0,1) + b_i.
To allow self-adaptation, the bias coefficients ξ_i are mutated in the following meta-EP way:

ξ'_i = ξ_i + γ · N_i(0,1),
with parameter γ determining the mutation strength on the bias. The biased mutation operator (BMO) biases the mean of mutation and enables the ES to reproduce offspring outside the standard mutation ellipsoid. To direct the search, the biased mutation enables the center of the ellipsoid to move within the bounds of the regular step sizes σ_i. An adaptive variant of the originally self-adaptive biased mutation is the descent mutation operator. It estimates the descent direction from the centers of the populations of two successive generations. Let c^(t) be the center of the population with μ individuals at generation t:

c^(t) = (1/μ) · Σ_{k=1}^{μ} x_k.
The normalized descent direction d of two successive population centers c^(t−1) and c^(t) is

d = (c^(t) − c^(t−1)) / ‖c^(t) − c^(t−1)‖.
Similar to the BMO, the descent mutation operator (DMO) now becomes

x'_i = x_i + σ_i · N_i(0,1) + σ_i · d_i.
The DMO is reasonable as long as the assumption of locality is true: the estimated direction of the global optimum can be derived from local information, that is, the descent direction of two successive population centers. Again, we analyze both biased mutation operators on the test problems 2.40 and TR2 and show the results in Table 5. For the sake of adaptation of the bias, an increase of the number of offspring individuals is necessary. The bias mutation parameter γ is set to its standard setting. Our experiments show that the BMO and the DMO are both able to improve the results on problem 2.40. The experiments reveal that the deformation of the mutation distribution improves the success rate—intuitively by shifting the center of the mutation ellipsoid so that the latter is not cut off by the infeasible solution space. But the results show that the harder problem TR2 is still not easy to approximate.
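The BMO idea can be sketched in a few lines (a hedged sketch: bias coefficients in [-1, 1] scaled by the step sizes shift the mutation mean; the parameter name `gamma` and its default value are assumptions):

```python
import numpy as np

def biased_mutation(x, sigma, xi, rng, gamma=0.1):
    """BMO sketch: shift the mean of Gaussian mutation by b = xi * sigma.

    The bias coefficients xi in [-1, 1] are themselves perturbed
    (meta-EP style) so the bias direction can adapt self-adaptively.
    """
    xi_new = np.clip(xi + gamma * rng.normal(size=len(x)), -1.0, 1.0)
    b = xi_new * sigma                       # bias bounded by the step sizes
    x_new = x + sigma * rng.normal(size=len(x)) + b
    return x_new, xi_new
```

The DMO variant would replace the self-adaptive `xi` by the normalized descent direction of two successive population centers.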

We can conclude that biased mutation improves the approximation of optima with active constraints. Descent biased mutation is comparatively efficient, in particular more efficient than the BMO. But biased mutation consumes many fitness and constraint function evaluations, and on some problems it may still suffer from low success rates.
4.3. Mutation Ellipsoid Rotation
Correlated mutation by Schwefel [45] rotates the axes of the mutation hyperellipsoid to adapt to local properties of the fitness landscape. Three ways are possible to rotate the mutation ellipsoid with the help of the N(N − 1)/2 possible rotation angles:
(1) a self-adaptive rotation—in this case the rotation angles become strategy parameters and the algorithm has to tune itself, (2) a rotation with the help of a coevolutionary approach, and (3) a rotation with a metamodel of the constraint boundary that delivers the orientation of the constraint boundary. Table 6 shows the experimental results of self-adaptive correlated mutation (SAES), a meta-evolutionary approach ((3,15(3,15))-MAES) [5], and correlated mutation using the metamodel estimator (MMES) with different numbers of binary search steps. Correlated mutations make use of additional strategy parameters, that is, N(N − 1)/2 angles for the rotation of the hyperellipsoid. The self-adaptation process of the SAES fails to adapt the angles automatically: the parameter space of step sizes and angles is too large to be adapted successfully by means of self-adaptation. The MAES is a nested ES, that is, an outer ES evolves the angles of an inner ES that optimizes the problem itself. Of course, this approach is rather inefficient—as one fitness evaluation of the outer ES causes a whole run of the inner ES on the original problem—but the results demonstrate that the rotation of the hyperellipsoid has a strong impact on the approximation capabilities on problem TR2. The MMES approach is capable of estimating the proper rotation angle and controlling the mutation ellipsoid to approximate the optimum. We use the linear metamodel that will be introduced in Section 5. The rotation angles can be computed from the normal vector of the estimated hyperplane and the axes of the mutation ellipsoid; this is an easy undertaking in two dimensions. A comparison between MMES runs with few and with many binary search steps shows that it is advantageous to invest search effort in a precise metamodel estimation: a higher accuracy of the metamodel delivers better approximation results.
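A rotated mutation ellipsoid in two dimensions can be sketched as follows (an illustrative sketch; in the approaches above the rotation angle would come from self-adaptation, the outer ES, or the constraint metamodel):

```python
import numpy as np

def rotated_mutation(x, sigma1, sigma2, alpha, rng):
    """Sample a correlated mutation in 2D: an axis-aligned ellipsoid
    with step sizes (sigma1, sigma2), rotated by the angle alpha."""
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    z = np.array([sigma1, sigma2]) * rng.normal(size=2)
    return x + R @ z
```

With alpha chosen such that the long axis of the ellipsoid runs parallel to the constraint boundary, the region of success is no longer cut off, which is the alignment effect discussed above.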

Obviously, the coordinate system alignment problem is solved by the mutation ellipsoid rotation. But the self-adaptive rotation does not lead to satisfying results, while the meta-evolutionary approach is inefficient. In the following section we will investigate whether covariance matrix adaptation techniques, which are designed to align coordinate systems, are able to adapt their covariance matrix to constrained problems automatically, without a metamodel.
4.4. Covariance Matrix Techniques
Past research on constraint handling has largely neglected covariance matrix adaptation techniques; it is an astonishing fact that no sophisticated constraint-handling techniques for these algorithms have been introduced so far. Nevertheless, we will now analyze whether the coordinate system alignment problem can be solved by covariance matrix adaptation using death penalty. The idea of covariance matrix adaptation techniques is to adapt the distribution of the mutation operator such that the probability of reproducing steps that led to the actual population increases. This idea is similar to the estimation of distribution approaches. The covariance matrix adaptation evolution strategy (CMA-ES) was introduced by Hansen and Ostermeier [46, 47]. The results of the CMA-ES on problems TR2 and 2.40 can be found in Table 7. Amazingly, the CMA-ES is able to cope with the low success rates around the optimum of the TR problem. The average number of infeasible solutions observed during the approximation indicates that a reasonable adaptation of the mutation ellipsoid takes place. An analysis of the angle between the main axis of the mutation ellipsoid and the constraint boundary shows that it converges to zero, as do the step sizes during the approximation of the optimum. Hence, the coordinate system alignment is successful.

We can conclude that the CMA-ES is able to align the coordinate system automatically, without a metamodel. Recent results have shown that an acceleration can be achieved if the covariance matrix is rotated with the help of a metamodel exactly at the time when the constraint boundary is reached [3].
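The alignment effect can be illustrated with a minimal (1+1)-ES that performs a rank-one covariance update on success and applies death penalty to infeasible samples (a drastically simplified sketch of the covariance adaptation idea, not the CMA-ES of Hansen and Ostermeier; the learning rate and step-size factors are assumptions):

```python
import numpy as np

def one_plus_one_cma_dp(f, feasible, x0, iters=2000, seed=3):
    """(1+1)-ES sketch: infeasible offspring are discarded (death
    penalty); each successful step s updates C <- (1-c)*C + c*s*s^T
    so the mutation ellipsoid aligns with directions of past progress."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    x, fx = np.array(x0, float), f(x0)
    C, sigma, c = np.eye(n), 1.0, 0.2
    for _ in range(iters):
        s = sigma * rng.multivariate_normal(np.zeros(n), C)
        y = x + s
        if not feasible(y):
            sigma *= 0.99            # simple step control on rejection
            continue
        if f(y) < fx:
            x, fx = y, f(y)
            # rank-one update with the (sigma-normalized) successful step
            C = (1 - c) * C + c * np.outer(s, s) / max(sigma**2, 1e-12)
            sigma *= 1.05
        else:
            sigma *= 0.98
    return x, fx
```

On a TR2-like instance (sphere model with the constraint x_1 + x_2 ≥ 2) the covariance matrix stretches along the boundary, which keeps the success rate from collapsing.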
5. Metamodeling of Constraints
In black-box scenarios the constraint boundaries are not explicitly given. Metamodeling of constraints allows advanced constraint-handling methods. Metamodels can be used for various purposes, for example, for checking the feasibility of solutions, for repairing infeasible mutations, and—as we have seen in the previous section—for controlling mutation ellipsoids and covariance matrices. Metamodeling of objective functions has developed into a successful standard in evolutionary optimization [48–50].
5.1. Linear Constraint Estimation
For constraint metamodeling various classification and regression methods can be applied. For the case of linear constraints, a metamodel that is based on sampling infeasible points and on binary search on the segments to the last feasible point has been developed [3]. The approach works as follows. First, the center point of the model estimator is determined: when the first infeasible offspring individual is produced, the feasible parent becomes the center of the corresponding model estimator, and the distance to the infeasible offspring becomes the radius of the model estimator. Then, random points are generated on the surface of the hypersphere around this center, whose radius is chosen such that the constraint boundary is cut. In N dimensions, N − 1 additional infeasible points have to be produced. The model estimator produces the infeasible points by sampling on the surface of the hypersphere until a sufficient number of infeasible points are found. The points on the surface are sampled randomly with uniform distribution using the method of Marsaglia [51]: in the first step the algorithm produces Gaussian distributed points and scales them to length 1; further scaling and shifting yields randomly distributed points on the hypersphere surface.
In the next step the binary search procedure is applied to identify points on the constraint boundary: the line between the feasible center point and the i-th infeasible point cuts the real constraint hyperplane in a point. We approximate this point with binary search on the segment: the center of the last interval defined by the final points of the binary search is an estimation of the boundary point. Figure 2 illustrates the situation. With regard to the estimated angle error, the real hyperplane lies between two bounding hyperplanes around the estimated one.
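The two building blocks, uniform sampling on the hypersphere surface (Marsaglia) and binary search towards the boundary, can be sketched as follows (a sketch; `feasible` stands in for the black-box constraint check):

```python
import numpy as np

def sample_on_sphere(center, radius, rng):
    """Marsaglia: normalize a Gaussian vector to obtain a uniformly
    distributed direction, then scale and shift onto the sphere."""
    z = rng.normal(size=len(center))
    return center + radius * z / np.linalg.norm(z)

def boundary_point(x_feas, x_infeas, feasible, steps=40):
    """Binary search on the segment between a feasible and an
    infeasible point; the final midpoint estimates a boundary point."""
    a, b = np.array(x_feas, float), np.array(x_infeas, float)
    for _ in range(steps):
        m = 0.5 * (a + b)
        if feasible(m):
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

Each binary search step halves the remaining interval, so the accuracy of the boundary estimate (and hence of the angle error) is controlled directly by the number of steps.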
In the last step we calculate the normal vector of the estimated hyperplane using the points on the constraint boundary. We assume that these points represent linearly independent vectors, as the endpoints of the lines they lie on have been generated in a random procedure. A successive Gram-Schmidt orthogonalization of the i-th vector against the previously produced vectors delivers the normal vector of the hyperplane. Note that we estimate the normal vector of the linear constraint model only once, that is, when the first infeasible solutions have been detected. Later update steps only concern the local support point of the hyperplane (hence, in iteration t the linear model is specified by the normal vector and the current support point). At the beginning, any of the boundary points may serve as support point. For later update steps two cases have to be distinguished. Let d_t be the distance between the mutation ellipsoid center and the constraint boundary at time t, and let k be the number of binary search steps required to achieve the desired angle accuracy.
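Estimating the normal vector from N boundary points can be sketched with a Gram-Schmidt step (a sketch; in practice the points would come from the binary searches described above):

```python
import numpy as np

def estimate_normal(points):
    """Estimate the unit normal of the hyperplane through N points in R^N.

    The differences to the first point span the hyperplane; Gram-Schmidt
    orthonormalizes them, and removing their components from a vector
    outside the span leaves the normal direction.
    """
    P = np.array(points, float)
    diffs = P[1:] - P[0]                 # N-1 vectors spanning the plane
    basis = []
    for v in diffs:
        for b in basis:
            v = v - (v @ b) * b          # subtract projections onto basis
        basis.append(v / np.linalg.norm(v))
    # orthogonalize a coordinate axis that does not lie in the plane
    for e in np.eye(P.shape[1]):
        n = e.copy()
        for b in basis:
            n = n - (n @ b) * b
        if np.linalg.norm(n) > 1e-8:
            return n / np.linalg.norm(n)
    raise ValueError("degenerate point set")
```

For the plane x + y + z = 1 through the three unit points, the procedure recovers the direction (1, 1, 1)/sqrt(3) up to sign.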
(1) The search (i.e., the center of the mutation ellipsoid) approaches H: if the distance d_t between the ellipsoid center and H falls below a threshold, a re-estimation of the support point is reasonable.
(2) The search moves parallel to H: if the distance to the current support point p_t exceeds a threshold, a re-estimation of p_t is triggered as well.
In both cases we use k binary search steps on the line between the current infeasible solution and the last feasible one to find the new support point p_{t+1}.
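The normal vector estimation via Gram-Schmidt can be sketched as follows (an illustrative implementation with hypothetical names: the difference vectors of the boundary points span the hyperplane, and a trial vector orthogonalized against them yields the normal):

```python
import numpy as np

def hyperplane_normal(boundary_points):
    """Estimate the unit normal of a hyperplane from n points in n
    dimensions: Gram-Schmidt orthogonalize the n-1 spanning difference
    vectors, then remove their components from a random trial vector."""
    pts = np.asarray(boundary_points, dtype=float)
    basis = []
    for v in pts[1:] - pts[0]:                 # n-1 vectors spanning the plane
        for b in basis:
            v = v - (v @ b) * b                # subtract projections onto basis
        basis.append(v / np.linalg.norm(v))
    rng = np.random.default_rng(0)
    while True:
        w = rng.standard_normal(pts.shape[1])  # trial vector
        for b in basis:
            w = w - (w @ b) * b                # orthogonalize against the plane
        norm = np.linalg.norm(w)
        if norm > 1e-10:                       # retry if w lay in the plane
            return w / norm
```

For three points spanning the plane x + y + z = 1, the result is a unit vector proportional to (1, 1, 1), up to sign.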
For nonlinear constraints, other regression or classification techniques may be taken into account, such as support vector regression or support vector machines [52].
5.2. Feasibility Check
A metamodel can be used to check the feasibility of new solutions in order to reduce the number of constraint function evaluations [3]. For this purpose, an accurate estimate of the constraint boundary is necessary. Only solutions the model predicts to be feasible are checked with a real evaluation of the constraint function. Two errors are possible for the feasibility prediction of an individual x.
(1) The model predicts that x is feasible, but it is not. Points of this category are examined with a real constraint evaluation, which causes an unnecessary constraint function call.
(2) The model predicts that x is infeasible, but it is feasible. The individual is discarded, although it may be a very good approximation of the optimum.
As an example, we take the linear constraint metamodel of the previous paragraph into account and test the feasibility check approach. A safety margin ε can reduce the number of errors of type 2. We set ε relative to the distance d between the mutation ellipsoid center and the estimated constraint boundary; hence, the distance between the center and the shifted constraint boundary becomes d + ε. A regular update of the constraint boundary support point is necessary; see Section 5.1. Table 8 shows the results of the CMA-ES with feasibility check using the constraint metamodel. We observe significant savings of fitness and constraint evaluations combined with a high approximation quality.
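A sketch of the feasibility check with a safety margin, under the assumption (a convention chosen here, not stated in the original) that the unit normal points toward the infeasible region; shifting the decision boundary by the margin into the infeasible region makes type-2 errors (discarding actually feasible individuals) less likely:

```python
import numpy as np

def predicted_feasible(x, normal, support, margin):
    """Linear-metamodel feasibility prediction: x is predicted feasible
    if it lies on the feasible side of the hyperplane shifted by `margin`
    into the infeasible region (the unit normal points to the infeasible
    side). Only predicted-feasible points get a real constraint check."""
    x = np.asarray(x, dtype=float)
    normal = np.asarray(normal, dtype=float)
    support = np.asarray(support, dtype=float)
    signed_dist = (x - support) @ normal  # > 0 on the infeasible side
    return signed_dist <= margin
```

A larger margin trades fewer type-2 errors against more unnecessary constraint evaluations of type 1.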

5.3. Solution Repair
The repair approach projects infeasible mutations onto the constraint boundary H. We assume an angle error ε that can be estimated from the number of binary search steps k. In the solution repair approach, the projection vector is elongated by a length γ that compensates this error. Figure 3 illustrates the situation. Let p_t be the support point of the hyperplane at time t, let n be its unit normal vector pointing toward the infeasible region, and let x be the infeasible solution. It holds that the signed distance d = ⟨x − p_t, n⟩ is positive. We get the repaired solution x' = x − (d + γ) n.
The elongation of the projection into the potentially feasible region guarantees feasibility of the repaired individuals. Nevertheless, it might prevent fast convergence, in particular in regions far away from the hyperplane support point, as the elongation γ grows with increasing length of the projection vector. The support point of the hyperplane is updated in regular intervals. The results of the CMA-ES repair algorithm can be found in Table 9. We observe a significant decrease of fitness function evaluations, in particular on problem TR2. The search concentrates on the boundary of the infeasible search space, in particular on the feasible side.
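The repair step can be sketched as a projection with elongation (hypothetical names; the unit normal is assumed to point toward the infeasible region, so the signed distance of an infeasible point is positive):

```python
import numpy as np

def repair(x, normal, support, gamma):
    """Project an infeasible point onto the estimated hyperplane and
    elongate the projection by gamma into the estimated feasible
    half-space: x' = x - (<x - support, normal> + gamma) * normal."""
    x = np.asarray(x, dtype=float)
    normal = np.asarray(normal, dtype=float)
    support = np.asarray(support, dtype=float)
    signed_dist = (x - support) @ normal  # > 0 for infeasible points
    return x - (signed_dist + gamma) * normal
```

For the boundary x[0] = 1 with normal (1, 0) and γ = 0.5, the infeasible point (2, 3) is moved to (0.5, 3): onto the plane and then half a unit further into the feasible half-space.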

6. Summary
Many constraint-handling methods exist for evolution strategies, first and foremost penalty functions. Due to low success rates at the constraint boundary, evolution strategies without coordinate alignment techniques often fail to find optima that lie in a vertex of the feasible solution space. The death penalty step control approach and the multiobjective, biologically inspired two-sex ES prevent a premature step size reduction on some problems, but their success depends on proper parameter settings. Low success rates at the constraint boundary can be increased with coordinate system alignment techniques. A first step in this direction are biased mutation techniques, that is, biased mutation and descent biased mutation. Much better results can be achieved with metamodel-based mutation ellipsoid rotation. This rotation cannot be achieved self-adaptively, but it emerges automatically with covariance matrix adaptation mechanisms. The latter show excellent results, even on hard problems like TR2. Further improvements of the CMA-ES can be achieved with metamodeling: constraint boundary surrogates can be used to predict the feasibility of mutations and to repair infeasible solutions. Finally, Table 10 summarizes the best results that could be achieved on the two problems by combining the CMA-ES with covariance matrix rotation, feasibility check, and repair of infeasible solutions.

Metamodeling of constraints will probably become more and more important in future research. Nonlinear models will increase the accuracy of feasibility prediction, and advanced regression methods will improve the accuracy of repaired infeasible solutions. Further constraint-handling methods are conceivable, such as the adaptation of mutation probability distributions and covariance matrices, also with nonlinear metamodels.
References
[1] C. Floudas and P. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, Springer, Berlin, Germany, 1990.
[2] O. Kramer, "Premature convergence in constrained continuous search spaces," in Proceedings of the 10th International Conference on Parallel Problem Solving from Nature (PPSN '08), pp. 62–71, Springer, Dortmund, Germany, 2008.
[3] O. Kramer, A. Barthelmes, and G. Rudolph, "Surrogate constraint functions for CMA evolution strategies," in Proceedings of the 32nd German Annual Conference on Artificial Intelligence (KI '09), pp. 169–176, Paderborn, Germany, September 2009.
[4] O. Kramer, S. Brugger, and D. Lazovic, "Sex and death: towards biologically inspired heuristics for constraint handling," in Proceedings of the 9th Conference on Genetic and Evolutionary Computation (GECCO '07), pp. 666–673, ACM Press, London, UK, July 2007.
[5] O. Kramer and H.-P. Schwefel, "On three new approaches to handle constraints within evolution strategies," Natural Computing, vol. 5, no. 4, pp. 363–385, 2006.
[6] O. Kramer, C.-K. Ting, and H. Kleine Büning, "A mutation operator for evolution strategies to handle constrained problems," in Proceedings of the 7th Conference on Genetic and Evolutionary Computation (GECCO '05), pp. 917–918, Washington, DC, USA, June 2005.
[7] O. Kramer, C.-K. Ting, and H. Kleine Büning, "A new mutation operator for evolution strategies for constrained problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 2600–2606, Edinburgh, UK, September 2005.
[8] I. Rechenberg, Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution, Frommann-Holzboog, Stuttgart, Germany, 1973.
[9] H.-P. Schwefel, Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, Birkhäuser, Basel, Switzerland, 1977.
[10] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies—a comprehensive introduction," Natural Computing, vol. 1, pp. 3–52, 2002.
[11] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Springer, Berlin, Germany, 2003.
[12] J. C. Bean and A. B. Hadj-Alouane, "A dual genetic algorithm for bounded integer programs," Tech. Rep., University of Michigan, Kalamazoo, Mich, USA, 1992.
[13] A. Fiacco and G. McCormick, "The sequential unconstrained minimization technique for nonlinear programming—a primal-dual method," Management Science, vol. 10, pp. 360–366, 1964.
[14] A. Homaifar, S. H. Y. Lai, and X. Qi, "Constrained optimization via genetic algorithms," Simulation, vol. 62, no. 4, pp. 242–254, 1994.
[15] J. Joines and C. Houck, "On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs," in Proceedings of the 1st IEEE Conference on Evolutionary Computation, D. B. Fogel, Ed., pp. 579–584, IEEE Press, Orlando, Fla, USA, June 1994.
[16] A. Kuri-Morales and C. V. Quezada, "A universal eclectic genetic algorithm for constrained optimization," in Proceedings of the 6th European Congress on Intelligent Techniques & Soft Computing (EUFIT '98), pp. 518–522, Mainz, Aachen, Germany, September 1998.
[17] R. G. L. Riche, C. Knopf-Lenoir, and R. T. Haftka, "A segregated genetic algorithm for constrained structural optimization," in Proceedings of the 6th International Conference on Genetic Algorithms (ICGA '95), L. J. Eshelman, Ed., pp. 558–565, University of Pittsburgh, Morgan Kaufmann, San Francisco, Calif, USA, July 1995.
[18] C. A. Coello Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.
[19] K. Deb, "An efficient constraint handling method for genetic algorithms," Computer Methods in Applied Mechanics and Engineering, pp. 971–978, 2001.
[20] D. Powell and M. M. Skolnick, "Using genetic algorithms in engineering design optimization with non-linear constraints," in Proceedings of the 5th International Conference on Genetic Algorithms (ICGA '93), S. Forrest, Ed., pp. 424–431, University of Illinois at Urbana-Champaign, Morgan Kaufmann, San Francisco, Calif, USA, July 1993.
[21] F. Hoffmeister and J. Sprave, "Problem-independent handling of constraints by use of metric penalty functions," in Proceedings of the 5th Conference on Evolutionary Programming (EP '96), L. J. Fogel, P. J. Angeline, and T. Bäck, Eds., pp. 289–294, MIT Press, Cambridge, UK, February 1996.
[22] A. I. Oyman, K. Deb, and H.-G. Beyer, "An alternative constraint handling method for evolution strategies," in Proceedings of the Congress on Evolutionary Computation (CEC '99), vol. 1, pp. 612–619, IEEE Service Center, Piscataway, NJ, USA, July 1999.
[23] T. P. Runarsson, "Approximate evolution strategy using stochastic ranking," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 2760–2767, IEEE, Vancouver, Canada, July 2006.
[24] S. V. Belur, "CORE: constrained optimization by random evolution," in Proceedings of the Late Breaking Papers at the Genetic Programming Conference, J. R. Koza, Ed., pp. 280–286, Stanford University, Stanford, Calif, USA, July 1997.
[25] C. A. Coello Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
[26] S. Koziel and Z. Michalewicz, "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evolutionary Computation, vol. 7, no. 1, pp. 19–44, 1999.
[27] Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics, Springer, Berlin, Germany, 2000.
[28] J. Paredis, "Co-evolutionary constraint satisfaction," in Proceedings of the 3rd Conference on Parallel Problem Solving from Nature (PPSN '94), pp. 46–55, Springer, Jerusalem, Israel, October 1994.
[29] M. Schoenauer and Z. Michalewicz, "Evolutionary computation at the edge of feasibility," in Proceedings of the 4th Conference on Parallel Problem Solving from Nature (PPSN '96), H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, Eds., pp. 245–254, Berlin, Germany, September 1996.
[30] C. A. Coello Coello, "Constraint handling through a multiobjective optimization technique," in Proceedings of the Genetic and Evolutionary Computation Conference, A. S. Wu, Ed., pp. 117–118, Orlando, Fla, USA, July 1999.
[31] C. A. Coello Coello, "Treating constraints as objectives for single-objective evolutionary optimization," Engineering Optimization, vol. 32, no. 3, pp. 275–308, 2000.
[32] F. Jimenez and J. L. Verdegay, "Evolutionary techniques for constrained optimization problems," in Proceedings of the 7th European Congress on Intelligent Techniques and Soft Computing (EUFIT '99), H.-J. Zimmermann, Ed., Mainz, Aachen, Germany, September 1999.
[33] E. Mezura-Montes and C. A. Coello Coello, "Constrained optimization via multiobjective evolutionary algorithms," in Multi-Objective Problem Solving from Nature: From Concepts to Applications, pp. 53–75, 2008.
[34] I. C. Parmee and G. Purchase, "The development of a directed genetic search technique for heavily constrained design spaces," in Proceedings of the Conference on Adaptive Computing in Engineering Design and Control (PEDC '94), I. C. Parmee, Ed., pp. 97–102, University of Plymouth, Plymouth, UK, September 1994.
[35] M. Schoenauer and S. Xanthakis, "Constrained GA optimization," in Proceedings of the 5th International Conference on Genetic Algorithms (ICGA '93), S. Forrest, Ed., pp. 573–580, Morgan Kaufmann, San Francisco, Calif, USA, July 1993.
[36] P. D. Surry, N. J. Radcliffe, and I. D. Boyd, "A multi-objective approach to constrained optimisation of gas supply networks: the COMOGA method," in Proceedings of the Evolutionary Computing, AISB Workshop, T. C. Fogarty, Ed., Lecture Notes in Computer Science, pp. 166–180, Springer, Sheffield, UK, April 1995.
[37] E. M. Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.
[38] T. Takahama and S. Sakai, "Constrained optimization by the ε constrained differential evolution with gradient-based mutation and feasible elites," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), G. G. Yen, S. M. Lucas, G. Fogel et al., Eds., pp. 1–8, IEEE Press, Vancouver, Canada, July 2006.
[39] J. Liang and P. Suganthan, "Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism," in Proceedings of the IEEE Congress on Evolutionary Computation, G. G. Yen, S. M. Lucas, G. Fogel et al., Eds., pp. 9–16, IEEE Press, Vancouver, Canada, July 2006.
[40] E. Mezura-Montes, J. Velazquez-Reyes, and C. A. Coello Coello, "Modified differential evolution for constrained optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, G. G. Yen, S. M. Lucas, G. Fogel et al., Eds., pp. 25–32, Vancouver, Canada, July 2006.
[41] E. Mezura-Montes, Ed., Constraint-Handling in Evolutionary Computation, vol. 198 of Studies in Computational Intelligence, Springer, Berlin, Germany, 2009.
[42] H.-P. Schwefel, Evolutionsstrategie und numerische Optimierung, Ph.D. thesis, TU Berlin, Berlin, Germany, 1975.
[43] H.-P. Schwefel, Evolution and Optimum Seeking. Sixth-Generation Computer Technology, Wiley Interscience, New York, NY, USA, 1995.
[44] D. V. Arnold and D. Brauer, "On the behaviour of the (1+1)-ES for a simple constrained problem," in Proceedings of the 10th International Conference on Parallel Problem Solving From Nature (PPSN '08), pp. 1–10, Dortmund, Germany, September 2008.
[45] H.-P. Schwefel, "Adaptive Mechanismen in der biologischen Evolution und ihr Einfluss auf die Evolutionsgeschwindigkeit," Interner Bericht der Arbeitsgruppe Bionik und Evolutionstechnik am Institut für Mess- und Regelungstechnik, TU Berlin, Berlin, Germany, July 1974.
[46] N. Hansen, "The CMA evolution strategy: a tutorial," Tech. Rep., TU Berlin, ETH Zürich, Germany, 2005.
[47] A. Ostermeier, A. Gawelczyk, and N. Hansen, "A derandomized approach to self-adaptation of evolution strategies," Evolutionary Computation, vol. 2, no. 4, pp. 369–380, 1994.
[48] M. Emmerich, A. Giotis, M. Özdemir, T. Bäck, and K. Giannakoglou, "Metamodel-assisted evolution strategies," in Proceedings of the 7th International Conference on Parallel Problem Solving from Nature (PPSN '02), pp. 361–370, Granada, Spain, September 2002.
[49] S. Kern, N. Hansen, and P. Koumoutsakos, "Local meta-models for optimization using evolution strategies," in Proceedings of the 9th International Conference on Parallel Problem Solving from Nature (PPSN '06), pp. 939–948, Reykjavik, Iceland, 2006.
[50] H. Ulmer, F. Streichert, and A. Zell, Optimization by Gaussian Processes Assisted Evolution Strategies, Springer, Heidelberg, Germany, 2003.
[51] G. Marsaglia, "Choosing a point from the surface of a sphere," The Annals of Mathematical Statistics, vol. 43, pp. 645–646, 1972.
[52] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer, Berlin, Germany, 2009.
Copyright
Copyright © 2010 Oliver Kramer. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.