Abstract

This paper presents global optimization algorithms that combine the ideas of interval branch and bound and stochastic search. Two algorithms for unconstrained problems are proposed: a hybrid interval simulated annealing and a combined interval branch and bound and genetic algorithm. Numerical experiments show better results than Hansen's algorithm and simulated annealing in terms of storage, speed, and the number of function evaluations. A convergence proof is given. Moreover, the ideas behind both algorithms suggest a structure for an integrated interval branch and bound and genetic algorithm for constrained problems, which is also described and tested. The aim is to capture one of the solutions with higher accuracy and lower cost. The results show solutions of better quality, obtained with fewer function evaluations, than those of the traditional GA.

1. Introduction

Many problems in economics, business, science, and engineering are modeled as constrained optimization problems:

$$\min_{x \in X} f(x) \quad \text{subject to} \quad g_i(x) \le 0,\; i = 1, \dots, p, \qquad h_j(x) = 0,\; j = 1, \dots, q.$$

There are various approaches to these problems. Interval algorithms use branch and bound techniques to capture all solutions; one drawback is that they often require more memory and CPU time [1]. Stochastic search algorithms are usually easy to implement, and no assumptions about continuity or differentiability are required. They are commonly used in numerous fields. However, there is no guarantee on the solution reported when the algorithm is terminated in finite time; the theoretical support is only convergence in probability. We are interested in combining the interval branch and bound technique with stochastic search. We first study the combined algorithms for unconstrained problems and then modify them to handle constrained problems.

The paper is organized as follows. In the next section, the related algorithms are introduced, namely, the interval branch and bound, simulated annealing, and the continuous genetic algorithm. Studies on improvements of these algorithms are also discussed. Our proposed algorithms are presented in Section 3: two algorithms for unconstrained problems and one for constrained problems. In Section 4, the numerical experiments and a discussion are given, followed by the conclusions in Section 5.

2. Interval Branch and Bound and Stochastic Algorithms

2.1. Interval Branch and Bound
2.1.1. Unconstrained Optimization

An interval algorithm is a tool that uses interval arithmetic for finding all solutions of an optimization problem. Before discussing the algorithms, let us first introduce the notation that we will use in this paper.

For an unconstrained problem, we study $\min_{x \in X} f(x)$. The notation is as follows:
$X$: the search domain;
$f(X)$: the range of $f$ over $X$;
$\mathbb{I}$: the set of real compact intervals $[a, b]$, $a \le b$;
$\mathbb{I}^n$: the set of $n$-dimensional column vectors of intervals; an element $B \in \mathbb{I}^n$ will be called a "box";
$F$: an inclusion function of $f$;
$s$: the number of subregions;
$m$: the number of sample points from a given box;
listx: a list containing the boxes and their information;
$L$: the number of boxes in listx, or the length of listx;
$T$: the temperature;
$T_0$: an initial temperature for the cooling schedule of simulated annealing;
$N_{\max}$: the maximum number of iterations;
$\mathrm{mid}(A)$: the midpoint of an interval $A = [a, b]$, $\mathrm{mid}(A) = (a + b)/2$;
$w(A)$: the width of an interval $A = [a, b]$, $w(A) = b - a$; if $B \in \mathbb{I}^n$, $w(B)$ is the maximum width over the components of $B$;
$\varepsilon_X$: a tolerance for the width of a box;
$\varepsilon_f$: a tolerance for the width of the interval $F(B)$;
$\tilde{f}$: the best value of $f$ the algorithm has encountered;
$\tilde{x}$: a vector corresponding to $\tilde{f}$;
$f_k$: the value of $\tilde{f}$ at iteration $k$;
$\underline{F}(B)$: the lower bound of the interval $F(B)$;
$\overline{F}(B)$: the upper bound of the interval $F(B)$.

An interval branch and bound algorithm has the procedure given in Algorithm 1.

Algorithm 1 (Interval branch and bound).
(1) Put the domain $X$ into the list.
(2) repeat
(3)  Choose a working box from the list, called $B$, bisect it into two subboxes $B_1$ and $B_2$, and put them on the list.
(4)  Delete $B$ from the list.
(5)  Discard a box from the list if it cannot contain a solution.
(6) until (the termination criteria hold)

A working box could be selected to be the one that has been on the list the longest or the one with the least lower bound of its inclusion function $F$. The bisection direction is usually the direction of maximum width. A box $B$ will be discarded if $\underline{F}(B) > \tilde{f}$. The termination conditions can be set using a prescribed maximum number of iterations, the width of the box, or the width of the interval $F(B)$. A detailed discussion can be found in [2]. When the algorithm ends, all optima are contained in the boxes of the list.
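
To make the loop concrete, here is a minimal Python sketch of the procedure with the least-lower-bound selection rule and the discard test $\underline{F}(B) > \tilde{f}$. The two-variable objective, its natural inclusion function, and the tolerance are illustrative assumptions, not the paper's test setup.

import heapq

def iadd(a, b):
    # interval addition: [a] + [b]
    return (a[0] + b[0], a[1] + b[1])

def isq(a):
    # interval square: [a]^2 contains 0 when [a] straddles 0
    lo, hi = a
    lo2, hi2 = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(lo2, hi2), max(lo2, hi2))

def F(box):
    # natural inclusion function of f(x, y) = (x - 1)^2 + y^2 (illustrative)
    x, y = box
    return iadd(isq((x[0] - 1.0, x[1] - 1.0)), isq(y))

def f(p):
    return (p[0] - 1.0) ** 2 + p[1] ** 2

def midpoint(box):
    return tuple(0.5 * (lo + hi) for lo, hi in box)

def width(box):
    return max(hi - lo for lo, hi in box)

def bisect(box):
    # split the box along its widest edge
    i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
    lo, hi = box[i]
    m = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[i], right[i] = (lo, m), (m, hi)
    return tuple(left), tuple(right)

def branch_and_bound(domain, eps=1e-6):
    f_best = f(midpoint(domain))            # an upper bound on the minimum
    heap = [(F(domain)[0], domain)]         # entries keyed by the lower bound of F
    while heap and width(heap[0][1]) > eps:
        _, box = heapq.heappop(heap)        # working box: least lower bound of F
        for sub in bisect(box):
            f_best = min(f_best, f(midpoint(sub)))
            lb = F(sub)[0]
            if lb <= f_best:                # discard sub when lb exceeds the best value
                heapq.heappush(heap, (lb, sub))
    return f_best, [b for _, b in heap]

best, boxes = branch_and_bound(((-2.0, 2.0), (-2.0, 2.0)))
print(best, len(boxes))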

The algorithm has difficulty when the dimension is high or the function is complicated: the widths of the boxes in the list decrease very slowly, and so does the improvement of the best value $\tilde{f}$.

We describe here two versions of the interval algorithm. The first was proposed by Ichida and Fujii, in which the working box is the box with the least lower bound of $F$. The second is Hansen's algorithm, in which the working box is the oldest box in the list.

2.1.2. Constrained Optimization

For constrained problems, a box $B$ will be deleted from the list by the condition $\underline{F}(B) > \tilde{f}$ only if all points in $B$ are feasible. The feasibility of a box is tracked by a flag vector whose $i$th element corresponds to the $i$th constraint and takes a value in $\{0, 1, 2\}$. The elements are assigned by the following rules.

For an inequality constraint $g_i(x) \le 0$ with inclusion function $G_i$: if $\overline{G_i}(B) \le 0$, then $\text{flag}_i = 0$. If $\underline{G_i}(B) > 0$, $\text{flag}_i = 2$. Otherwise, the constraint is indeterminate on $B$, and $\text{flag}_i = 1$.

For an equality constraint $h_j(x) = 0$ with inclusion function $H_j$, we consider the relaxed constraint $|h_j(x)| \le \varepsilon$, and $\text{flag}_j$ is set according to the width and the bounds of $H_j(B)$ as follows: if $H_j(B) \subseteq [-\varepsilon, \varepsilon]$, then $\text{flag}_j = 0$; if $H_j(B) \cap [-\varepsilon, \varepsilon] = \emptyset$, then $\text{flag}_j = 2$; otherwise, $\text{flag}_j = 1$.

The status of a box is determined from its flag vector. If at least one element of the flag vector is 2, the status is set to 2 (the box will be deleted from the list). If all elements are 0, the box is feasible and the status is set to 0. Otherwise, the status is 1. Usually, a box with status 1 will be bisected.
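
These rules translate directly into code. The sketch below assumes each constraint's interval bound over a box is available as a (lower, upper) pair and that the relaxation tolerance is a chosen constant; both are assumptions for illustration.

def flag_inequality(G_box):
    # flag for g(x) <= 0 from the interval bound G(B) = (lower, upper)
    lo, hi = G_box
    if hi <= 0.0:
        return 0      # satisfied everywhere in the box
    if lo > 0.0:
        return 2      # violated everywhere in the box
    return 1          # indeterminate

def flag_equality(H_box, eps=1e-4):
    # flag for the relaxed constraint |h(x)| <= eps from H(B) = (lower, upper)
    lo, hi = H_box
    if -eps <= lo and hi <= eps:
        return 0      # H(B) lies inside [-eps, eps]
    if lo > eps or hi < -eps:
        return 2      # H(B) misses [-eps, eps] entirely
    return 1

def box_status(flags):
    # 2: discard the box, 0: feasible, 1: indeterminate (usually bisected)
    if any(v == 2 for v in flags):
        return 2
    if all(v == 0 for v in flags):
        return 0
    return 1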

2.2. Simulated Annealing

Simulated annealing (SA) is a stochastic search technique built on an analogy with thermodynamics. It has a mechanism for avoiding entrapment in local optima by allowing an occasional uphill move. It also incorporates a temperature parameter into the procedure: the search explores more at high temperature and is restricted when the temperature is low. The basic idea of the search is that, for a given current state with energy level $E$, a subsequent state is generated randomly. If the new state has lower energy, it is accepted as the current state. Otherwise it is accepted with probability $e^{-\Delta E / T}$, where $\Delta E$ is the increase in energy and $T$ is the temperature. For a minimization problem, a solution corresponds to a state of the system and the objective function corresponds to the energy level. Prior to the process, the cooling schedule and the neighborhood structure must be defined. SA is described in Algorithm 4. A thorough discussion can be found in [4].
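
The following is a minimal, self-contained Python sketch of the procedure. The quadratic test function, the Gaussian neighborhood, and the geometric cooling parameters are illustrative assumptions.

import math
import random

def simulated_annealing(f, neighbor, x0, T0=1.0, cooling=0.95,
                        iters_per_temp=50, T_min=1e-4):
    # f plays the role of the energy; neighbor(x) proposes a new state
    x, fx = x0, f(x0)
    best, f_best = x, fx
    T = T0
    while T > T_min:
        for _ in range(iters_per_temp):
            y = neighbor(x)
            fy = f(y)
            # Metropolis criterion: always accept downhill moves,
            # accept uphill moves with probability e^{-(fy - fx)/T}
            if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < f_best:
                    best, f_best = x, fx
        T *= cooling  # new temperature = cooling rate * temperature
    return best, f_best

# usage: minimize a 1-D quadratic with Gaussian moves
sol, val = simulated_annealing(lambda x: (x - 3.0) ** 2,
                               lambda x: x + random.gauss(0.0, 0.5), 0.0)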

2.3. Population Based Methods

Population based methods use a population of points in each iteration. One advantage of a population is that, if the problem has multiple optima, they can be captured in the final population. Examples of such techniques are the genetic algorithm (GA), ant colony, particle swarm, and differential evolution. They are widely used in business, science, and engineering. The simple GA is described next.

GA imitates natural evolution through three processes: the creation of offspring, selection, and mutation. The creation of offspring diversifies the search, while the selection process narrows it. Mutation prevents entrapment in a local minimum. GA is outlined in Algorithm 5.

The main idea of the selection process is to choose better individuals for the next generation by considering the fitness of each one. We concentrate here on minimization problems; thus, the better fit individual is the one with the lower value of $f$. Good selection schemes must allow convergence to the optimal solution without getting caught in a local minimum. Many selection techniques have been proposed, along with studies of their convergence.

Some of the methods are, for example, elitist selection (the best individuals of each generation must be selected), proportional selection (better fit individuals have a higher probability of being selected), ranking selection (the individuals are ranked according to their fitness and the selection is based on this ranking), and tournament selection (individuals are divided into subgroups, the members of each group compete against each other, and only one is selected into the new generation). The mutation rate is set so that it converges to 0 as the iteration count increases. The termination condition can be the maximum number of generations or no change in the best value over a given number of generations. A comparison of the performance of the selection schemes, including their convergence, can be seen in, for example, [5-7].
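
As an illustration of one of these schemes, here is a minimal Python sketch of tournament selection for minimization. The group size and the sampling of groups are assumptions, not prescriptions from the text.

import random

def tournament_selection(population, f, group_size=2):
    # repeatedly draw a small group and keep its best (lowest-f) member,
    # until the new generation has the same size as the old one
    selected = []
    for _ in range(len(population)):
        group = random.sample(population, group_size)
        selected.append(min(group, key=f))
    return selected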

The disadvantage of GA is that there is no guarantee of finding optimal solutions within a finite amount of time, and the tuning of parameters such as the population size or mutation rate is sometimes based on trial and error. However, there are no requirements of differentiability or continuity.

2.4. The Modifications

The previously described algorithms can be modified for improvement. Some modifications related to our work are given next.

For the interval branch and bound algorithms, adjustments can be made in the following aspects for better solutions:
(i) the inclusion function: the kite inclusion function [8];
(ii) the subdivision of the domain: multisection [8-10];
(iii) the selection of a box for further processing: for example, the box with the largest rejection index, such as the one defined in Casado et al. [11],
$$pf(B) = \frac{\tilde{f} - \underline{F}(B)}{\overline{F}(B) - \underline{F}(B)},\tag{2}$$
where $pf(B)$ represents the rejection index of a box $B$ (see the sketch below).
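
A small sketch of box selection by this index follows; the formula coded here is the form shown above, and the (lower bound, upper bound, box) entry layout is an assumption. Note that the index is undefined when $F(B)$ has zero width.

def rejection_index(F_lo, F_up, f_best):
    # pf(B): closer to 1 when the box is a better candidate for the minimum
    return (f_best - F_lo) / (F_up - F_lo)

def select_box(listx, f_best):
    # entries are assumed to be (F_lo, F_up, box) triples
    return max(listx, key=lambda e: rejection_index(e[0], e[1], f_best))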

Studies on the accuracy of solutions and the speed of the algorithm when using an interval branch and bound to solve constrained problems include, for example, the following.
(i) Casado et al. [11] define the rejection index of a box as in (2) to identify a good candidate box to contain a global minimum.
(ii) Lagouanelle and Soubry [8] use a new inclusion function called the kite enclosure.
(iii) Sun and Johnson [12] introduce local sampling strategies for the working box. A convergence proof is presented along with numerical results.
(iv) Karmakar and Bhunia [13] demonstrate how to obtain one of the solutions by partitioning the accepted box (starting with the search domain) into subboxes, dividing each edge into a given number of parts, calculating the function value over each subbox, and then using interval order relations to choose a new accepted box.

There are many approaches to handling constraints in GA; classifications can be seen in [3, 14]. We describe those that combine interval branch and bound and genetic algorithms.

Alander [15] suggests two ways of combining the two algorithms. The first is replacing, at least partly, the function to be optimized by some of its interval extensions. The other is using GA for some internal subproblems of the interval algorithm.

Sotiropoulos et al. [16] use an interval branch and bound technique to create subregions and choose the midpoint of each subregion to be individuals in an initial population for GA.

Zhang and Liu [17] use the rejection index (2) as a fitness function for a box, where $\tilde{f}$ is the best value found. The population is taken from the highest-fitness boxes. Mutation is performed on 1/3 of the population with some probability. The best-fit box is split into subboxes.

An attempt to combine the interval branch and bound and simulated annealing for unconstrained problems can be seen in Shary [18]. The lower bound of $F$ is used in calculating the probability of accepting a new box. If this box is accepted, it is bisected and the half with the smaller lower bound of $F$ is chosen to be the next leading box. Otherwise, a different box is chosen. It is tested on the six-hump camel function and Rastrigin's function.

3. The Proposed Algorithms

We first study the performance of Ichida-Fujii (A2) and Hansen (A3) to guide the design of our algorithms. A set of unconstrained problems in Appendix A is used. Tables 1 and 2 show that Ichida-Fujii is more effective than Hansen in terms of speed (number of iterations), storage (the list length), and cost (number of function evaluations). It appears that searching and bisecting the box with the least lower bound is the fastest way to reach the optimal solutions.

For unconstrained problems, if we assume that $f$ is continuous, we know for certain that the box with the least lower bound of $F$ must contain a minimum. The earlier a quality $\tilde{f}$ is discovered, the faster unwanted boxes can be deleted.

To obtain a quality $\tilde{f}$, we consider combining SA and GA with an interval branch and bound. SA and GA act as search engines, while the interval branch and bound is responsible for keeping all solutions.

When dealing with constrained problems, the deletion condition is in effect only when a box is known to be feasible or infeasible. For most problems, the list will contain a high percentage of boxes with status 1, which is indeterminate. That means the width of a box must be small enough to separate the feasible from the infeasible region. The situation is even worse when the dimension of the problem is high.

There are two concerns in our algorithm:
(i) choosing a potential box to search, so that a quality $\tilde{f}$ can be obtained to promote the deletion of unwanted boxes;
(ii) bisecting the box to isolate a feasible region.

For unconstrained problems, we can search in a box with the least lower bound of $F$. However, for constrained problems we need a function that provides information about both the value of $f$ and the lower bound of $F$ when deciding which box to search for a better $\tilde{f}$. Let us first define a function fit($B$) to be the minimum value of $f$ among the feasible sample points from a box $B$. If the box contains a minimizer, fit($B$) gets close to the minimum as the width of $B$ approaches 0. The information of $\underline{F}(B)$ and fit($B$) is combined using a function called fun, defined by
$$\text{fun}(B) = \alpha_k\,\underline{F}(B) + (1 - \alpha_k)\,\text{fit}(B),$$
where $\{\alpha_k\}$ is a nonincreasing sequence with $\alpha_k \in [0, 1]$. Note that if $\alpha_k = 1$ for all $k$, the algorithm uses $\underline{F}(B)$ when deciding on a search box. And when $\alpha_k = 0$ for all $k$, only the function fit, which is related to $\tilde{f}$, will be considered.
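
A small Python sketch of the two functions follows. The sampling-based fit and the convex-combination form of fun are our reading of the definitions above, so the exact weighting should be checked against the original; the feasibility test is passed in as a callback.

import random

def fit(f, box, feasible, m=10):
    # minimum f over m random sample points of the box that pass the
    # feasibility test; +inf when no feasible sample is found
    best = float("inf")
    for _ in range(m):
        p = tuple(random.uniform(lo, hi) for lo, hi in box)
        if feasible(p):
            best = min(best, f(p))
    return best

def fun(F_lower, fit_value, alpha):
    # alpha = 1 reduces to the lower bound of F; alpha = 0 reduces to fit
    return alpha * F_lower + (1.0 - alpha) * fit_value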

We will first present two algorithms for unconstrained problems to study their efficiency in terms of speed, storage, and cost. Then the algorithm for constrained problems is described. Our goal is to get information about the search region from the interval branch and bound and then provide it to the continuous genetic algorithm to improve its efficiency.

3.1. Hybrid Interval Branch and Bound and Simulated Annealing for Unconstrained Problems

Our algorithm uses simulated annealing as a mechanism that encourages the search in a promising box while avoiding entrapment in a local minimum. It is described in Algorithm 6. We choose the box with the maximum difference between the best value found so far and the lower bound of $F$, that is, the box maximizing $\tilde{f} - \underline{F}(B)$, to be bisected. Since $\underline{F}(B) \le \tilde{f}$ for such a box, bisecting it might make it possible to discard half of the box.
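
As a small illustration of this selection rule, the following Python fragment picks the box maximizing $\tilde{f} - \underline{F}(B)$; the (lower bound, box) pair layout of the list entries is an assumption.

def choose_bisection_box(listx, f_best):
    # the chosen box maximizes f_best - F_lower, i.e., it has the least
    # lower bound of F among the stored boxes
    return max(listx, key=lambda entry: f_best - entry[0])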

Algorithm 6 differs from [18] in the selection of a working box. We also evaluate more than one point to update the best value found so far, $\tilde{f}$.

The stopping criteria are that $w(B) \le \varepsilon_X$ or $w(F(B)) \le \varepsilon_f$ for all boxes in listx, or that the number of iterations has exceeded a prescribed maximum $N_{\max}$.

We use a linear cooling schedule by prescribing the number of iterations at each temperature. The temperature is changed by using the given cooling rate; that is, new temperature = cooling rate × current temperature.

For Algorithm 6, the parameters that can be adjusted are the number of initial boxes $s$, the initial temperature $T_0$, and the annealing schedule.

Note that the value of fit can be obtained by performing some iterations of a search algorithm of your choice in a box. In each iteration, the fit values of two boxes, the active box $B$ and the newly picked box $V$, are calculated. Thus, $\tilde{f}$ can also be updated.

The Convergence. The following behaviors can be observed from the mechanism of Algorithm 6.
(i) At the initialization stage, the probability that a given box in listx is selected to be the working box is $1/s$. In each iteration every box has a positive probability of being picked as the competing box. Both boxes will be searched by randomly choosing $m$ points and recording the best value found in $\tilde{f}$ (Algorithm 7). Thus, $\{f_k\}$ is a nonincreasing sequence. This process can be viewed as a random search in a domain that shrinks over time.
(ii) For each box $B$ in listx, the inequality $\underline{F}(B) \le \tilde{f}$ holds.
(iii) At each iteration, either the active box or the competing box will be bisected, depending on which has the higher value of $\tilde{f} - \underline{F}(\cdot)$.
(iv) It is possible that a box whose width is not small survives through many iterations. Since the widths of the other boxes decrease with the iterations, such a box will be selected for bisection at some later time. Therefore, the sequence of widths of the bisected boxes is not decreasing.
(v) Every box has a nonzero probability of being bisected and thus made smaller, although the boxes in listx have different sizes. We can conclude that $w(B) \to 0$ as the iterations proceed for every box $B$ remaining in listx.

We can show that all solutions will be in listx after the algorithm successfully terminates.

Let us assume the usual properties of an inclusion function: $f(B) \subseteq F(B)$ and $w(F(B)) \to 0$ as $w(B) \to 0$. Denote by $U_k$ the union of all boxes in listx at iteration $k$; that is, $U_k = \bigcup_{B \in \text{listx}} B$.

From the discarding rule, a box $B$ is discarded if $\underline{F}(B) > \tilde{f}$. Since $f^* \le \tilde{f}$ and $\underline{F}(B) \le f^*$ for any box $B$ containing a global minimizer $x^*$, such a box is never discarded, and the solution is still in the list. Therefore, all solutions are contained in the boxes of the list; that is, $x^* \in U_k$ for every $k$.

Suppose $x^* \in U_k$ for all $k$. There exists a box $B_k$ in listx with $x^* \in B_k$ and $w(B_k) \to 0$. The properties $w(F(B)) \to 0$ as $w(B) \to 0$ and $f(B) \subseteq F(B)$ imply that $\underline{F}(B_k) \to f(x^*)$ as $k \to \infty$.

Now suppose that $B^*$ is a box containing the minimum of $f$; then $\text{fit}(B^*) \to f^*$ as $w(B^*) \to 0$. With the properties above and the monotonicity of $\{f_k\}$, we can conclude that $f_k \to f^*$.

3.2. Integrated Interval Branch and Bound and GA for Unconstrained Problems

Algorithm 8 combines a population technique with the interval branch and bound to give a higher probability of obtaining a better best value $\tilde{f}$. Thus, more boxes can be deleted from the list. The population is selected from the boxes with the least lower bounds of $F$, and a new set of candidates is created using the linear crossover from Michalewicz's book [3] (Algorithm 10). The information of two points is combined, and the two outputs are controlled to be in the search domain; they are not necessarily in the set of working boxes. Only the best points are carried to the next generation.

The parameters that can be adjusted for the performance of Algorithm 8 are the following: the number of boxes, the number of individuals per generation, the number of working boxes, the number of individuals carried into the new generation, and the maximum number of iterations. There are also two procedures that can be changed: the process of creating a new set of points and the rule for selecting a new generation (Algorithm 9).

3.3. Integrated Interval Algorithm and GA for Constrained Problems

The major disadvantage of interval methods is that they require more memory and CPU time than noninterval algorithms. We propose an algorithm that integrates the known bounds from an interval algorithm with the quickness of a genetic algorithm. Of course, certainty about the solutions is lost. However, an improvement in the quality of the solution and a reduction of the cost are gained.

Let us first introduce additional functions which will be used in the algorithm. ifit_box($B$) is 0 if at least one feasible point has been found in box $B$; otherwise, it is 1. The elements of the flag vector corresponding to a box in the list are 0 or 1. Therefore, we define nviol_box($B$) to be the sum of the elements of the flag vector of $B$; it roughly indicates the amount of constraint violation for the box. The status of a box in the list is either 0 or 1, since a box with status 2 is discarded as soon as this status becomes known. For constrained problems, fun is modified: a term involving nviol_box($B$) is added to take care of feasibility, giving the function fun in (6). In the case that a feasible point is not discovered in Step 3 of Algorithm 11, the upper bound of $F(B)$ is assigned to fit($B$).
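
In code, nviol_box is just the sum of the flag vector. The sketch below also shows one hedged way to add the feasibility term to fun: the additive penalty and its weight are assumptions for illustration, and the precise form is the one given in (6).

def nviol_box(flags):
    # rough amount of constraint violation of a box: flags are 0 or 1 here,
    # since boxes with a flag of 2 have already been discarded
    return sum(flags)

def fun_constrained(F_lower, fit_value, flags, alpha, beta=1.0):
    # unconstrained combination plus a penalty on the violation count;
    # beta is an assumed weight, not taken from the paper
    return alpha * F_lower + (1.0 - alpha) * fit_value + beta * nviol_box(flags)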

We use a simple GA without mutation in Algorithm 12. Mutation is omitted because GA will be invoked in every iteration. The parameters that relate to GA are the number of individuals in each generation and the maximum number of generations. The difference between Algorithm 11 and GA is pointed out next.

In GA, an initial population is randomly chosen from the search domain. This population is then evolved through the three operators: crossover, selection, and mutation. The change of individuals in a population happens through the creation of children and the mutation process.

In Algorithm 11, GA is performed in every iteration with an assigned maximum number of generations. An initial population consists of the best individual, $\tilde{x}$, and individuals randomly chosen from a given box considered a potential region for a better solution. In the big picture, this is similar to performing mutation with a rate of one at every iteration. The mutation is biased because it is restricted to points in the promising region, namely listx[idsearch]. After this, the offspring and populations are allowed to be anywhere in the search domain. For Algorithm 11, convergence is achieved when no box can be discarded; thus, the boxes left in the list already satisfy the width tolerances.
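
A sketch of this biased initialization follows, assuming a box is stored as a tuple of (lower, upper) bounds per coordinate.

import random

def init_population(x_best, box, pop_size):
    # the best individual found so far plus random points drawn from the
    # promising box listx[idsearch]
    pop = [x_best]
    while len(pop) < pop_size:
        pop.append(tuple(random.uniform(lo, hi) for lo, hi in box))
    return pop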

4. Numerical Results

4.1. Unconstrained Problems

Tables 3 and 6 show the value of $\tilde{f}$ found by Algorithms 2, 3, 4, 6, and 8 for the two tested dimensions, respectively. The maximum number of iterations is set to 400,000 and 600,000. In the tables, A1 stands for Algorithm 1, and similarly for the other algorithms. For Algorithms 6 and 8, the maximum length of listx, the number of iterations, and the number of function evaluations are the maxima taken over ten runs. The tolerance is fixed in advance, and the algorithm terminated successfully, with $\tilde{f}$ within the prescribed tolerance, for all ten runs; since the found values agree to that tolerance, the tables present a single $\tilde{f}$ per problem.

Algorithm 2 (Ichida-Fujii).
(1) Set $Y = X$.
(2) Calculate $F(Y)$ and set $\tilde{f} = f(\mathrm{mid}(Y))$.
(3) Initialize listx $= \{(Y, \underline{F}(Y))\}$.
(4) repeat
(5)   Bisect $Y$ in the direction of the maximum length of the edge such that $Y = Y_1 \cup Y_2$.
(6)   Calculate $F(Y_1)$ and $F(Y_2)$.
(7)   Remove $(Y, \underline{F}(Y))$ from listx.
(8)   Enter the pairs $(Y_1, \underline{F}(Y_1))$ and $(Y_2, \underline{F}(Y_2))$ into listx in a way that the lower bounds in the list do not decrease.
(9)   Denote the first box in listx as $Y$.
(10)  Let $c = \mathrm{mid}(Y)$ and calculate $f(c)$.
(11)  $\tilde{f} = \min(\tilde{f}, f(c))$.
(12)  Delete a pair $(B, \underline{F}(B))$ from listx if $\underline{F}(B) > \tilde{f}$.
(13) until (the stopping condition becomes true)

Algorithm 3 (Hansen).
(1) Set $Y = X$.
(2) Calculate $F(Y)$ and set $\tilde{f} = f(\mathrm{mid}(Y))$.
(3) Initialize listx $= \{(Y, \underline{F}(Y))\}$.
(4) repeat
(5)   Bisect $Y$ in the direction of the maximum width such that $Y = Y_1 \cup Y_2$.
(6)   Calculate $F(Y_1)$ and $F(Y_2)$.
(7)   Remove $(Y, \underline{F}(Y))$ from listx.
(8)   Enter the pairs $(Y_1, \underline{F}(Y_1))$ and $(Y_2, \underline{F}(Y_2))$ at the end of listx.
(9)   Discard a pair $(B, \underline{F}(B))$ from listx if $\underline{F}(B) > \tilde{f}$.
(10)  Denote the first pair of listx by $(Y, \underline{F}(Y))$.
(11)  $\tilde{f} = \min(\tilde{f}, f(\mathrm{mid}(Y)))$.
(12) until (the stopping condition becomes true)

Algorithm 4 (Simulated annealing).
(1) Set up an initial state $s$, an initial temperature $T_0$, and the number of iterations $N_T$ for a fixed temperature; set $T = T_0$.
(2) repeat
(3)   for $i = 1$ to $N_T$ do
(4)    Randomly select a new state $s'$ from the neighborhood of $s$.
(5)    $\Delta = E(s') - E(s)$.
(6)    Calculate $p = e^{-\Delta/T}$.
(7)    Generate a random number $u$ uniformly in the range $(0, 1)$.
(8)    if ($\Delta \le 0$ or $u < p$) then
(9)     $s = s'$.
(10) end if
(11) end for
(12)  Update $T$ using the cooling schedule.
(13) until (the stopping condition becomes true)

Algorithm 5 (Simple GA).
(1) Set $k = 0$.
(2) Set up an initial population $P_k$ of $N$ individuals.
(3) repeat
(4)  Create offspring from population $P_k$ and put them in a set $C_k$.
(5)  Select a new generation $P_{k+1}$ from $P_k$ and $C_k$.
(6)  Perform mutation on individuals in $P_{k+1}$ with probability $p_m$.
(7)  Set $k = k + 1$.
(8) until (the termination condition is met)

Algorithm 6 (Hybrid interval branch and bound and simulated annealing).
(1) Set $T = T_0$.
(2) Subdivide the domain $X$ into $s$ boxes and put them in listx.
(3) Randomly select an active box from listx, called $B$.
(4) Calculate fit($B$) using Algorithm 7.
(5) $\tilde{f} =$ fit($B$).
(6) repeat
(7)   Pick a new box at random from listx excluding $B$, called $V$.
(8)   Calculate fit($V$).
(9)   Calculate $\Delta =$ fit($V$) $-$ fit($B$).
(10)  Calculate $p = \min(1, e^{-\Delta/T})$.
(11)  Randomly select a number $u \sim$ Unif$(0, 1)$.
(12)  if ($u < p$) then
(13)   if ($\tilde{f} - \underline{F}(B) \ge \tilde{f} - \underline{F}(V)$) then
(14)    Bisect $B$ in the direction of the maximum length of the edge such that $B = B_1 \cup B_2$.
(15)    Delete $B$ from listx. Put $B_1$ and $B_2$ into listx.
(16)    Set $B = V$.
(17)   else
(18)    Bisect $V$. Delete $V$ from listx. Put $V_1$ and $V_2$ into listx.
(19)    Set $B$ to be the half of $V$ with the smaller $\underline{F}$.
(20)   end if
(21)  else
(22)   if ($\tilde{f} - \underline{F}(B) \ge \tilde{f} - \underline{F}(V)$) then
(23)    Bisect $B$. Delete $B$ from listx. Put $B_1$ and $B_2$ into listx.
(24)    Set $B$ to be the half of $B$ with the smaller $\underline{F}$.
(25)   else
(26)    Bisect $V$. Delete $V$ from listx. Put $V_1$ and $V_2$ into listx.
(27)    end if
(28)  end if
(29)   Calculate fit($B$) using Algorithm 7.
(30)   $\tilde{f} = \min(\tilde{f},$ fit($B$)).
(31)   Update $T$ using the annealing schedule.
(32)   Check the deletion criteria to remove some boxes from listx.
(33) until (stopping criteria are satisfied)

Algorithm 7 (Computing fit of a box).
(1) Randomly select $m$ points from a given box $B$ and evaluate $f$ at each point.
(2) Let fit($B$) be the minimum value of $f$ among the sample points.

Algorithm 8 (Integrated interval branch and bound and GA).
(1) Subdivide the domain $X$ into $s$ boxes and calculate $F$ of each box.
(2) Put each box along with its lower bound of $F$ in listx in nondecreasing order of the lower bounds; that is, listx $= \{(B_1, \underline{F}(B_1)), (B_2, \underline{F}(B_2)), \dots\}$.
(3) Set $k = 0$.
(4) Let $W$ be the set of the first $n_w$ boxes on listx.
(5) Randomly select $m$ points from $W$ and put them in a set $P$. Evaluate the $f$ value of each point in $P$.
(6) Set $\tilde{f}$ to be the minimum value of $f$ over $P$.
(7) repeat
(8)   Generate a new set of points, $C$, from $P$ using Algorithm 9.
(9)   Evaluate the $f$ value of each point in $C$. Update $\tilde{f}$.
(10)  Let $B$ be a box with the maximum value of $\tilde{f} - \underline{F}(B)$, where $B \in W$.
(11)  Bisect $B$ in the direction of the maximum length of the edge such that $B = B_1 \cup B_2$.
(12)  Calculate $F(B_1)$ and $F(B_2)$.
(13)  Remove $B$ from listx.
(14)  Enter the pairs $(B_1, \underline{F}(B_1))$ and $(B_2, \underline{F}(B_2))$ into listx in a way that the lower bounds do not decrease.
(15)  Let $W$ be the set of the first $n_w$ boxes on listx.
(16)  Prepare $P$ by choosing the points with the lowest values of $f$ from the two sets $P$ and $C$. The other points are randomly chosen from $W$.
(17)   Delete a pair from listx if its lower bound exceeds $\tilde{f}$.
(18)   Set $k = k + 1$.
(19) until (the stopping condition becomes true)

Algorithm 9 (Creating a new set of points).
Require: a set $P$ and a number of points $N_c$
(1) repeat
(2)  Randomly select two points from $P$.
(3)  Perform linear crossover using Algorithm 10 and put the output points in $C$.
(4) until (the number of points in $C$ is $N_c$)
(5) return the set $C$

Algorithm 10 (Linear crossover).
Require: two parents $x^1, x^2 \in X$ with $x^1 \neq x^2$
(1) Randomly select an integer $j \in \{1, 2, 3\}$.
(2) Randomly select $\alpha$ from the interval given below.
(3) if $j = 1$, $\alpha \in (0, 1)$.
(4) if $j = 2$, $\alpha \in (1, 3/2)$.
(5) if $j = 3$, $\alpha \in (-1/2, 0)$.
(6) $y^1 = \alpha x^1 + (1 - \alpha) x^2$.
(7) $y^2 = (1 - \alpha) x^1 + \alpha x^2$.
(8) return $y^1$, $y^2$
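
For concreteness, a Python sketch of this crossover follows. The $\alpha$ ranges mirror the endpoints of Michalewicz's linear crossover, and the clipping step enforces the search domain; both details are assumptions rather than the paper's exact choices.

import random

def linear_crossover(x1, x2, bounds):
    j = random.randint(1, 3)
    if j == 1:
        alpha = random.uniform(0.0, 1.0)    # interpolation between parents
    elif j == 2:
        alpha = random.uniform(1.0, 1.5)    # extrapolation beyond x1
    else:
        alpha = random.uniform(-0.5, 0.0)   # extrapolation beyond x2
    y1 = [alpha * a + (1.0 - alpha) * b for a, b in zip(x1, x2)]
    y2 = [(1.0 - alpha) * a + alpha * b for a, b in zip(x1, x2)]

    def clip(y):
        # keep the offspring inside the search domain
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(y, bounds)]

    return clip(y1), clip(y2)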

Algorithm 11 (Integrated interval algorithm and GA for constrained problems).
(1) Set $T = T_0$.
(2) Subdivide $X$ into $s$ boxes and put them in listx.
(3) Calculate fit using Algorithm 7 for every box in listx and set the value of ifit_box($\cdot$).
(4) $\tilde{f} = \min\{$fit($B$) $: B \in$ listx$\}$.
(5) Randomly choose an integer idactive $\in \{1, \dots, L\}$. Denote the box in listx[idactive] by $B$.
(6) Calculate nviol_box($B$) and fun($B$).
(7) repeat
(8)   Randomly choose an integer idnew $\in \{1, \dots, L\} \setminus \{$idactive$\}$. Let $V$ be the box in listx[idnew].
(9)   Calculate nviol_box($V$) and fun($V$).
(10)  Calculate $\Delta =$ fun($V$) $-$ fun($B$), where fun is defined in (6).
(11)  Randomly select a number $u \sim$ Unif$(0, 1)$.
(12)   if ($u \ge \min(1, e^{-\Delta/T})$) then
(13)    idsearch = idactive
(14)  else
(15)    idsearch = idnew
(16)  end if
(17)  Perform GA using Algorithm 12 on the box listx[idsearch].
(18)  Let idbisect be the index corresponding to the box with $\max\{$nviol_box($B$), nviol_box($V$)$\}$.
(19)  Bisect the box from listx[idbisect] into $B_1$ and $B_2$.
(20)  Delete listx[idbisect].
(21)  Check the status of $B_1$ and $B_2$. Discard a box with status 2.
(22)  Put $B_1$ and $B_2$ at the bottom of listx.
(23)  if (idsearch == idbisect) then
(24)   idactive = the index of the remaining half with the smaller fun value
(25)  else
(26)    idactive = idsearch
(27)  end if
(28)  Check the deletion criteria to remove some boxes from listx.
(29)    Update $T$ using the annealing schedule.
(30)    Update $\alpha_k$.
(31) until (stopping criteria are satisfied)

Algorithm 12 (GA used within Algorithm 11).
(1) Set up an initial population $P$. It consists of $\tilde{x}$ and the other points randomly chosen from the box corresponding to idsearch.
(2) for $g = 1$ to $G_{\max}$ do
(3)   Use linear crossover, Algorithm 10, to produce a set $C$ of individuals with $|C| = |P|$.
(4)   Choose the best individuals from $P$ and $C$ and put them in the new population. For feasible points, the lower value of $f$ is the better. For infeasible points, the smaller constraint violation is the better.
(5)   Update $\tilde{f}$, $\tilde{x}$, and the fit value of the box if it applies.
(6) end for

Table 4 displays the maximum length of listx and the number of iterations used by the algorithm of Ichida-Fujii (A2). Since A2 uses the least storage, the tables present the ratio of the storage used by each other algorithm to that of A2. For example, in problem 1 the maximum list length of A4 is 4.2 times the maximum list length of A2, which is about 1714. Table 5 shows the number of function evaluations of A2 and the ratio of the number of function evaluations used by each other algorithm to that of A2 for the smaller dimension. Tables 6, 7, 8, and 9 present similar results for the larger dimension.

The observations from the numerical results are the following.
(1) Ichida-Fujii (A2) works best.
(2) Our proposed algorithms (A6 and A8) are faster than SA (A4) and Hansen (A3).
(3) The hybrid interval SA (A6) can handle a higher dimensional domain better than SA (A4).
(4) The maximum list length used by A6 and A8 is mostly about 1-2 times that used by A2, even though A8 uses many more function evaluations. This implies that extra effort on function evaluations does not contribute much to the reduction of the boxes. However, it shows that the structure of the algorithm keeps the storage under control. It suggests using a small number of sample points or a small number of individuals in the population.
(5) Algorithms 6 and 8 use much more storage than A2 in problem 5 but work better when the dimension is higher.
(6) Even if the algorithm finds a high-quality $\tilde{f}$ at an early iteration, it may not be able to discard some boxes right away, because those boxes are not yet small enough for the condition on the lower bound of $F$ to be satisfied. Since only one box is bisected at each iteration, the removal process is put on hold.
(7) For the higher dimension, Algorithm 6 does not work for problem 11; it terminates for lack of memory before a reasonable result is obtained.
(8) Algorithm 6 still works quite well when the dimension is higher, but the population based method, A8, shows trouble with memory.

The results suggest that the structure of the algorithm, as in A6, handles the list length and the number of function evaluations quite well. However, the population based method captures the best value faster. Therefore, the population size and the maximum number of generations must be adjusted so as not to waste too many function evaluations. This information influenced the development of A11 for constrained problems.

4.2. Constrained Problems

The parameter settings for Algorithm 11 are described next. The maximum number of generations for GA in each iteration is set in the range of 15 to 25. The population size is 8-16, depending on the size of the domain. The initial temperature is 1. Both the cooling temperature and the sequence $\{\alpha_k\}$ are linearly decreasing. The number of initial boxes (line 2 of A11) is set to 2.

In problems 2, 3, and 5, Algorithm 11 terminated with the condition that no box in the list can be processed (every remaining box satisfies the width tolerances). The other problems terminated at the maximum of 10,000 iterations.

Table 10 shows information about the problems and the experimental results: the dimension, the number of constraints, the maximum width of the search domain, the error of $\tilde{f}$ found by Algorithm 11, and the error from the regular GA. The seventh column is the ratio of the number of function evaluations used by GA to the number used by Algorithm 11. GA usually uses more function evaluations; for the last two problems it uses about the same number, but Algorithm 11 discovers better solutions. The last column shows the percentage reduction in the number of function evaluations when Algorithm 11 is used.

Algorithm 11 successfully found optimal solutions, terminating with no box left to process, in problems 2, 3, and 5. For the other problems, A11 reports better solutions with fewer function evaluations than the traditional GA.

Notice that the test problems consist only of inequality constraints. For equality constraints, or a mixture of both, the results are not impressive: the branch and bound process does not provide good information about solutions at an early stage, and all boxes in the list still have status 1 when the algorithm reaches the maximum number of iterations.

5. Conclusions

We proposed two hybrid algorithms for unconstrained problems, Algorithms 6 and 8. The Metropolis criterion is used for choosing a search box. Algorithm 6 performs the search by random sampling and Algorithm 8 by GA. The box to be bisected is chosen by the maximum difference between the best value found by the algorithm and the lower bound of the inclusion function of the box. Both of them perform better than the traditional SA and Hansen's algorithm, where the bisected box is chosen by age. The structure of Algorithm 6 is modified to handle constrained problems, giving Algorithm 11.

The contribution of our proposed Algorithm 11 is a good structure for the algorithm, which gives the following advantages.
(1) It reduces the number of function evaluations of the regular genetic algorithm.
(2) If the problem is not too complicated, the solution is ensured by the branch and bound process. This is the advantage of our hybrid algorithm over GA. If the storage is limited, the quality of the reported solution is still acceptable. Moreover, the region that might contain the optimum is still in the list. If required, a local search can be applied to that region.
(3) The branch and bound process, which is easy to implement, is responsible for providing potential individuals for GA. It can be viewed as an acceleration of GA.

One weak point of our algorithms is that the deletion process is not activated until the boxes are small enough, even when a high-quality $\tilde{f}$ is found in an early iteration.

A further study includes the division of a box and the technique for handling equality constraints.

Appendices

A. Unconstrained Problems [1]

(1) Modified Rosenbrock function
(2) Zakharov function
(3) Sphere function
(4) Schwefel function 2.22
(5) Schwefel function 2.21
(6) Step function
(7) Generalized Rastrigin function
(8) Modified Griewank function
(9) Another modified Griewank function
(10) Locatelli's modification #2 of the Griewank function
(11) Locatelli's modification #3 of the Griewank function

B. Constrained Problems [3, 19]

Problems (c1)-(c10): ten constrained minimization problems, each defined by an objective function, a set of constraints ("subject to"), and bounds on the variables.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author would like to thank Professor Min Sun for his advice and encouragement on the interval algorithm. His kindness is gratefully acknowledged. This work was supported by Chiang Mai University, Thailand.