Research Article  Open Access
MEEF: A Minimum-Elimination-Escape Function Method for Multimodal Optimization Problems
Abstract
Auxiliary function methods provide effective and practical ideas for solving multimodal optimization problems. However, improper parameter settings often cause troublesome effects that can lead to failure in finding global optimal solutions. In this paper, a minimum-elimination-escape function method is proposed for multimodal optimization problems, aiming to avoid the troublesome “Mexican hat” effect and to reduce the influence of local optimal solutions. In the proposed method, a minimum-elimination function is first constructed to decrease the number of local optima. Then, a minimum-escape function is proposed based on the minimum-elimination function, in which the current minimal solution is converted into the unique global maximal solution of the minimum-escape function. The minimum-escape function is insensitive to its unique, easy-to-adopt parameter. Finally, a minimum-elimination-escape function method is designed based on these two functions. Experiments are conducted on 19 widely used benchmarks, in which the influences of the parameter and of different initial points are analyzed. Comparisons with 11 existing methods indicate that the proposed algorithm is effective and competitive.
1. Introduction
Global optimization plays a significant role in many fields, such as science, economics, and engineering. A global optimization problem can be formulated as follows: where , , and is considered to be multimodal and continuously differentiable in this paper.
Early research on unimodal global optimization problems yielded many results. However, these achievements cannot solve multimodal global optimization problems (GOPs) effectively. Multimodal optimization problems are difficult to solve because of the existence of many local optimal solutions, which often trap optimization algorithms in local optima. More and more attention has been paid to multimodal optimization. The main tasks of a solution algorithm are to find the global optimal solutions of multimodal problems at small computational cost while avoiding being trapped in local optimal solutions. Deterministic algorithms have been developed to deal with these difficulties; a literature review on this work can be found in [1]. Auxiliary function methods, as a kind of deterministic method, provide an effective and practical way to jump out of local optimal solutions, for example, the filled function method (FFM) [2–7], the tunneling method [8], the basin-hopping method [9], the sequential convexification method (SCM) [10], and the cut-peak function method [11]. An auxiliary function is a transformed objective function that constructs a path from one of the local minimizers of the original objective function to another, lower local minimizer [4]. The filled function method [2] is such an approach for finding the global minimal solutions of multimodal problems, in which the filled function is constructed to help jump from one local optimal solution to a better one. However, this kind of method is sensitive to its parameters. Improper settings often bring about the “Mexican hat” effect [12], which might make the algorithm fail to find global optimal solutions.
In this paper, a minimum-elimination function is constructed to eliminate the solutions worse than the best solution found so far. Using this function, the number of local optimal solutions can be reduced significantly. However, after being flattened by the minimum-elimination function, much information about the original optimization problem may be lost, so optimization algorithms still have difficulty solving the problem. Hence, we propose a minimum-escape function based on the minimum-elimination function, which can help the algorithm find better solutions. After conversion by the minimum-escape function, there is only one global maximum. Therefore, this function can point search algorithms in directions away from the best solution found so far and can help them avoid revisiting, or being trapped in, the local minimal solutions that have already been found. In this way, the troublesome “Mexican hat” effect can be avoided effectively.
The remainder of this paper is organized as follows: Section 2 introduces the motivation of this paper. Section 3 is dedicated to explaining the proposed minimum-elimination-escape function in detail. Section 4 gives the minimum-elimination-escape function method for multimodal problems. Experiments on the performance of the proposed algorithm are shown in Section 5. Finally, conclusions are drawn in Section 6.
2. Motivation
2.1. “Mexican Hat” Effect
When solving multimodal optimization problems, auxiliary function methods can jump out of local optimal solutions with the help of tailor-made auxiliary functions. For many auxiliary function methods, for example, the filled function method [2] and the stretching function method [13], improper parameter settings often cause an undesirable phenomenon called the “Mexican hat” effect [12]. As shown in Figures 1 and 2, this effect often introduces troublesome local minima into the auxiliary functions, which might make the search algorithm become trapped in local optimal solutions and fail to find global optimal solutions. This effect often introduces considerable additional complexity. It can be seen vividly that the local minima of the auxiliary functions in Figures 1 and 2 cannot help the optimization methods jump out of the local minima of the original functions. In this case, the optimization methods fall into endless loops and the optimization process fails.
The “Mexican hat” effect has been discussed in detail in [12]. Many efforts have been directed towards avoiding this undesirable phenomenon [11, 14]. The cut-peak function method [11] is one auxiliary function method that can avoid the “Mexican hat” effect. However, it has the disadvantage of losing global optima during the search process.
2.2. Lost Global Optima
Although the cut-peak function method has the advantage of keeping the best solutions found so far, it can lose the global optimal solutions, especially at the end of the search process. In [11], the cut-peak function is suggested as follows: where is the best solution found so far, is a positive parameter, and was given two concrete examples as follows: Then, a choice function is constructed to find better solutions as follows:
When the parameter is properly adjusted, the choice function (4) can keep the original function values of some better solutions unchanged and can find better solutions easily. However, there are two cases that may lead to failure in finding global optimal solutions. One case is that if the distance between the best solution found so far and other local optimal solutions is too large, it becomes very difficult to adopt the parameter . The other case is that if the parameter is larger than a certain value, all the solutions better than will be ignored, which leads to failure in finding global optimal solutions. In particular, if the function values of a local optimal solution and the global optimal solution differ only slightly, the global optimal solution will be lost with very high probability, and it is very difficult to adopt . To illustrate the influence of clearly, we use the same example as [11], in which the objective function is , where . It is supposed that the best solution found so far is . The cut-peak function at is constructed as , and the parameter is set to 5 and 30, respectively. From Figure 3, one can see vividly that the cut-peak function keeps the basins better than unchanged when . However, when , no better basin is kept by the cut-peak function. It can easily be deduced that the smaller the parameter is, the more better basins are kept. Huang et al. proposed a revised cut-peak function in [15].
Although the cut-peak function can successfully avoid the “Mexican hat” effect, its performance is influenced significantly by its parameter . Moreover, adopting the parameter requires much computational cost. Therefore, in this paper we design a new parameter-insensitive auxiliary function that avoids the undesirable “Mexican hat” effect.
3. Minimum-Elimination-Escape Function
The minimum-elimination-escape function proposed in this section consists of a minimum-elimination function and a minimum-escape function. The minimum-elimination function is constructed to eliminate the solutions worse than the best solution found so far, aiming to reduce the influence of local optimal solutions. Then, a minimum-escape function is proposed to transform the minimum found so far into the unique global maximum. The minimum-escape function also leads optimization algorithms to explore the region far away from the best solution found so far. The properties of the proposed minimum-elimination-escape function are analyzed in this section as well.
3.1. Minimum-Elimination Function
Local optimal solutions have a great impact on the performance of an optimization algorithm, especially when there are a large number of them, which often leads to failure in finding global optimal solutions. To some degree, decreasing the number of local optimal solutions is a simple and effective way of reducing the complexity of the problem. Therefore, the minimum-elimination function is constructed here to eliminate the solutions worse than the best solution found so far. In this way, the number of local optimal solutions can be reduced, which is very helpful to the optimization algorithms.
The minimum-elimination function is constructed as follows: where is the original objective function, is the best solution found so far (changing from generation to generation), and is defined as follows:
It is obvious that, for , the minimum-elimination function has the following two properties: (i) if ; (ii) if . In other words, only when a better solution is found does the value of change. These two properties can be illustrated intuitively by smoothing a function of one variable, as shown in Figure 4, where the dashed line represents the original function and the black solid line represents the minimum-elimination smoothing function. Here, the example function is taken as , where . The minimum-elimination function is generated at the solution .
Flattened by the minimum-elimination function, can be converted to the following problem: where is the best solution found so far. The flattened problem is denoted as for convenience. It is obvious that and share the same global optima, and has fewer local minima than . However, after being flattened by the minimum-elimination smoothing function, contains many flat regions, so much mathematical information is lost.
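The concrete formula is elided in this copy of the paper, but the described behavior (keep every value at least as good as the best value found so far, and flatten every worse value down to it) can be sketched as follows; the function names and the example objective are illustrative assumptions, not the paper's:

```python
import math

# Hedged sketch of a minimum-elimination function: values worse (larger)
# than the best value found so far are flattened down to that best value,
# while better-or-equal values pass through unchanged.
def minimum_elimination(f, f_best):
    """Return the flattened function F with F(x) = f(x) if f(x) <= f_best, else f_best."""
    def F(x):
        return min(f(x), f_best)
    return F

# A one-variable multimodal example flattened at the current best value.
f = lambda x: x * math.sin(x)   # illustrative multimodal objective
x_best = 11.0                   # pretend this is the best point found so far
f_best = f(x_best)              # approximately -11.0
F = minimum_elimination(f, f_best)

assert F(x_best) == f_best      # the best value is kept
assert F(2.0) == f_best         # a worse region (f(2.0) > f_best) is flattened
assert F(17.0) == f(17.0)       # a better region keeps its original values
```

Note how the flattened function has the best value found so far as its maximum, attained on all the flat regions; this is exactly the shape the minimum-escape function of the next subsection works on.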
3.2. Minimum-Escape Function
As mentioned above, the flattened problem (FGP) loses much information and contains many flat regions, which are very difficult for an algorithm to handle. Improper handling often brings additional complexity to solving FGP. One important issue is to design suitable strategies for searching the region far away from the best solution found so far, aiming to find better solutions or basins. Therefore, a minimum-escape function is proposed to deal with the flat regions, which is designed at the best solution found so far as follows: where the unique parameter .
Figures 5–8 vividly show the properties of the minimum-escape function and give an intuitive impression of the proposed functions. In Figures 4 and 5, a 2-dimensional example is employed. Figures 6 to 8 illustrate the performance of the minimum-elimination smoothing function on the multimodal function and the performance of the proposed minimum-escape function on the smoothed function. It can be seen vividly from these 3D figures that, after flattening by the minimum-elimination function, many flat regions and holes are left. By the constructed function (8), the flattened function is transformed into an arch-like function with as its unique global maximal point. From Figures 5 and 8, we can clearly see that the holes still exist in the slope of the minimum-escape function. These holes are the holes of the flattened function, which are the basins (a basin [16] of at an isolated minimizer is a connected domain which contains and in which the steepest descent trajectory of converges to from any starting point, but outside which the steepest descent trajectory of does not converge to ) of where the better local optimal solutions are located. Therefore, the aim of the algorithm is to find these holes. We can imagine that a water drop on this vault will slide down along some direction under gravity, and its track might pass through some of these holes. Motivated by this phenomenon, we consider that one method can be employed, like gravity, to guide the search process to find the holes, and another method can be used to find the local optima. In the following subsection, the properties of the proposed function are analyzed in detail.
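Formula (8) is elided in this copy, so the sketch below uses one simple illustrative form consistent with the stated properties: the flattened value minus a penalty growing with the distance to the best point, which makes the best point the unique global maximum and leaves better basins as “holes” in the slope. The form E(x) = F(x) − ρ‖x − x*‖ and all names here are assumptions for illustration only, not the paper's construction:

```python
import math

# Illustrative minimum-escape sketch (NOT the paper's elided formula (8)):
#     E(x) = F(x) - rho * ||x - x_best||
# With the flattened F satisfying F(x) <= f_best everywhere, E attains its
# unique global maximum f_best at x_best and slopes downward with distance,
# while basins better than x_best dent the slope as "holes".
def minimum_escape(F, x_best, rho=0.5):
    def E(x):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_best)))
        return F(x) - rho * dist
    return E

# 2-D multimodal example, flattened at the best point found so far.
f = lambda x: x[0] * math.sin(x[0]) + x[1] * math.sin(x[1])
x_best = (11.0, 11.0)
f_best = f(x_best)
F = lambda x: min(f(x), f_best)     # illustrative minimum-elimination
E = minimum_escape(F, x_best)

assert E(x_best) == f_best          # the unique global maximum
assert all(E((a, b)) < f_best       # every other sampled point lies strictly below it
           for a in (0.0, 5.0, 20.0) for b in (0.0, 5.0, 20.0))
```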
3.3. Properties of Minimum-Escape Function
From Figures 4 to 8, it can be seen that the minimum-escape function has the following properties:
(1) is the unique global maximum of , and ;
(2) has no stationary points in the region , ;
(3) if is not a global minimizer of , then can distinguish the solutions better than .
To illustrate the properties of the proposed minimum-escape function, we choose as the reference point and generate the following discrimination function: To some degree, can be regarded as the slope of at with respect to the reference point , with . can help us understand the properties of and can also be used to estimate whether an obtained solution is better than .
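Under the same illustrative escape form used earlier (an assumption, since the paper's formulas are elided here), the discrimination function can be sketched as the slope of E between a point and the reference point; better points then show strictly larger slopes, in the spirit of Proposition 4:

```python
import math

# Hedged sketch of the discrimination (slope) function, assuming the
# illustrative escape form E(x) = F(x) - rho * ||x - x_best||:
#     D(x) = (E(x_best) - E(x)) / ||x - x_best||
#          = rho + (f_best - F(x)) / ||x - x_best||  >= rho,
# so D is larger at points whose flattened value is better (smaller).
def discrimination(E, x_best, f_best):
    def D(x):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_best)))
        return (f_best - E(x)) / dist   # E(x_best) = f_best in this sketch
    return D

rho = 0.5
f = lambda x: x[0] * math.sin(x[0]) + x[1] * math.sin(x[1])
x_best = (11.0, 11.0)
f_best = f(x_best)
F = lambda x: min(f(x), f_best)
E = lambda x: F(x) - rho * math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_best)))
D = discrimination(E, x_best, f_best)

# Two points at the same distance from x_best: the better one has a larger slope,
# and a flattened (worse) point has slope exactly rho.
better, worse = (17.0, 11.0), (5.0, 11.0)
assert D(better) > D(worse)
assert abs(D(worse) - rho) < 1e-12
```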
Proposition 1. If is continuous, then both and are continuous.
Proposition 2. Suppose that is the best solution found so far. Then, is the unique global maximizer of .
Proof. For ,Assume that there exists another point , such that, for any , . While thus , such that , which conflicts with the assumption.
Therefore, is the unique global maximizer of .
Proposition 3. For , .
Proof. For , it can be concluded from formula (8) that . Therefore, for , it is obvious that . For ,
Proposition 4. For ,
Proof. For , if , it is obvious that holds; if , then
Following Proposition 4, it is obvious that better solutions have a larger slope than worse solutions.
Denote Then,
Proposition 5. Suppose that is a basin of and is the minimizer in , then
Proof. It follows that , . Consider , , Therefore,
According to Proposition 5, one might conclude that if is the global minimizer of , then holds. Unfortunately, this conclusion does not always hold. We give a counterexample to show that a point satisfying is not necessarily the global minimizer of . Here, we still use the example function of this section, where and the best point is . In Figure 9, we denote the basins as and , which are surrounded by ellipses and , respectively. From Figure 9, it can be seen vividly that the global minimum is contained in , while is an ordinary basin worse than the global minimum. What is unusual is that does not achieve its global maximum at the global minimum of , but at a local minimum of in . This might seem strange; next, we discuss the reason for this phenomenon.
For the convenience of analyzing this phenomenon, the domain of the original problem is expanded to and the objective function is supposed to be a uniformly continuous function, which satisfies that, for , there exists such that . Under this assumption, we can obtain the following propositions.
Proposition 6. If , such that, for , holds, then
Proof. Consider
Proposition 7. For , suppose that and ; if , then
Proof. If , then and hold. Then
Propositions 6 and 7 can explain why this strange phenomenon occurs. Proposition 6 means that the further away is from , the closer will be to . If is sufficiently far away from , then the value of is so close to that its variation will be tiny. From Proposition 7, we can infer that the closer a basin is to , the more will change. If a local minimum is closer to the reference point than the global minimum, then might occur.
From Propositions 1, 2, and 4, it can be seen that the designed minimum-elimination function can efficiently eliminate the solutions worse than the best one found so far. In this way, the original problem can be converted into another global optimization problem sharing the same global optimum, which helps the solution algorithm avoid being trapped in local minima. The current best solution is converted into the global maximizer by the designed minimum-escape function, so the best solution found so far will not be visited twice during the search process. The designed minimum-escape function can help the solution algorithm explore the whole domain by reducing the objective function value in proportion to the distance to the best solution found so far. Proposition 4 also provides a method for estimating whether a point is better than the current best solution.
Estimating the parameter is another issue. In most existing auxiliary function methods, estimating parameters is an important and hard task. However, the parameter in the proposed function is not sensitive and can be adopted easily. In general, when is the best solution found so far, the search will move away from . However, when the search reaches a point far away from (usually the point is far from in order to jump out of the current basin), the main problem that arises is that may become too large, resulting in arithmetic overflow. Hence should be small in order to avoid this phenomenon; there is no problem in the proposed algorithm as long as is not taken too large. In this paper, the parameter is suggested to be taken in . The influence of the parameter is tested in the experiments.
4. Solution Algorithm
Based on the strategies described above, we design a new minimum-elimination-escape function method, named MEEF for short. In the proposed method, the search process is divided into global search and local search. A gradient-free descent method is employed for the global search, and a gradient-based descent method is employed for the local search. Using gradient-free descent methods can accelerate the discovery of better solutions, and inexact line search methods are suggested. The gradient-based descent method ensures that a found better solution can be refined to a precise local optimum. In the proposed method, the Armijo method is employed for the line search, and the BFGS method is employed to update the found solution.
The executable algorithm MEEF is designed as Algorithm 8.
Algorithm 8. Minimum-Elimination-Escape Function Method (MEEF)
(1) Given an initial point , preset the parameter . Let .
(2) Start from and use a gradient-based descent method to minimize the original objective function until a local minimum is obtained; if the stop condition is satisfied, stop; otherwise, go to Step 3.
(3) Construct the minimum-escape function according to (8) with respect to obtained in Step 2. Randomly generate uniformly distributed vectors on the unit ball; .
(4) Consider . Start from and use a gradient-free method along direction ; if there exists a point such that , , , then go to Step 5; else, go to Step 4.
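Algorithm 8 as printed here is incomplete (Step 5 is referenced but cut off in this copy), so the following is only a hedged end-to-end sketch of the loop it describes. The elided formulas are replaced by illustrative stand-ins (F(x) = min(f(x), f_best) for minimum-elimination; walking outward from the best point corresponds to descending an escape function whose slope "holes" are the points with F(x) < f_best), and a crude numerical-gradient descent stands in for the BFGS/Armijo steps; every name and constant below is an assumption:

```python
import math
import random

# Crude numerical-gradient descent, standing in for the BFGS/Armijo local step.
def local_descent(f, x, step=0.1, iters=200, h=1e-6):
    x = list(x)
    for _ in range(iters):
        fx = f(x)
        g = [(f(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h for i in range(len(x))]
        norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
        x = [xi - step * gi / norm for xi, gi in zip(x, g)]
        step *= 0.99                      # slowly shrink the normalized step
    return x

def meef(f, x0, n_dirs=20, n_outer=10, seed=0):
    rng = random.Random(seed)
    dim = len(x0)
    x_best = local_descent(f, x0)         # Step 2: refine to a local minimum
    f_best = f(x_best)
    F = lambda x: min(f(x), f_best)       # minimum-elimination (illustrative)
    for _ in range(n_outer):
        improved = False
        for _ in range(n_dirs):           # Steps 3-4: probe random unit directions
            v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            n = math.sqrt(sum(vi * vi for vi in v)) or 1.0
            v = [vi / n for vi in v]
            for k in range(1, 40):        # walk outward along v (gradient-free)
                x = [b + 0.5 * k * vi for b, vi in zip(x_best, v)]
                if F(x) < f_best:         # fell into a "hole": a better basin
                    x_best = local_descent(f, x)      # Step 2 again
                    f_best = f(x_best)
                    improved = True
                    break
            if improved:
                break
        if not improved:                  # no better basin found: stop
            break
    return x_best, f_best

# Demo on a 1-D multimodal function with global minimum -1 at x = 0.
g = lambda x: 0.02 * x[0] ** 2 - math.cos(x[0])
xb, fb = meef(g, [12.0])
assert fb < -0.95 and abs(xb[0]) < 0.1    # escaped two local basins to reach 0
```

In the demo, the sketch starts in the basin near x = 12, jumps basin by basin toward the origin, and stops once no probed point beats the best value found, mirroring the paper's jump-then-refine alternation.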
5. Numerical Experiments and Comparison
5.1. Benchmark Problems
We choose 19 standard benchmark problems from [3] to [17] to test the performance of the designed algorithm; they are listed in the appendix. The dimensions of these test problems range from 2 to 30. It should be pointed out that although and are almost the same, both are retained for comprehensive comparison with other methods.
5.2. Experimental Settings
In the experiments, the algorithm was set as follows.
(i) The Armijo line search method was employed as the gradient-free method for the global search in Step 4, and the BFGS method was taken as the gradient-based method for the local search in Step 2.
(ii) In the Armijo method, the initial step length is adopted as , where is the dimension of the problem and , are the lower and upper bounds, respectively.
(iii) The parameter . When analyzing the influence of , is taken as and , respectively.
(iv) The number of direction vectors is , where is the dimension of the problem.
(v) Selection of initial points: in the analysis of the influence of the parameter and initial points, the initial points are chosen as listed in Tables 2, 3, 4, and 7. In comparison with other methods, the initial point is generated randomly for each test problem.
(vi) Stopping criterion: the algorithm stops when holds, where is the function value obtained by our algorithm and is the known global optimal function value of the test function.
5.3. Experimental Results and Comparisons
The proposed algorithm MEEF is essentially a random search method, so its performance needs to be analyzed from different aspects. The performance of an auxiliary function method is often influenced by its parameters and initial points, which cannot be avoided. In the numerical experiments, the influences of the parameter and the initial points are tested first. Then, comparisons between MEEF and other methods are made.
In each of Tables 2 to 8, denotes the related benchmark problem used in this paper, the dimension of the benchmark problem, FE the number of function evaluations, the mean function value, the best function value, the standard deviation of the function value, Succ the ratio of successful runs, and “—” that no result was reported.
5.3.1. Influence of Parameter
For an auxiliary function method, the parameters often influence its performance significantly. Sometimes, improper parameters might increase the complexity of the original problem and affect the global search of the algorithm [14]. In order to analyze the influence of the parameter , we took different values for from the suggested interval and used the same initial points in the numerical experiments. Tables 2 and 3 show the results of MEEF using the same initial points with parameter and , respectively.
From Tables 2 and 3, it can be seen that MEEF can find optimal or close-to-optimal solutions for all test problems with different parameter values. Comparing the results listed in Tables 2 and 3, one can find that, with different parameter values, MEEF obtains almost the same results using the same number of function evaluations. This indicates that, taking from the suggested interval, the parameter has no significant influence on MEEF.
5.3.2. Influence of Initial Points
As is well known, the initial point plays an important role for a deterministic optimization method. Auxiliary function methods are a kind of optimization method that executes deterministic methods repeatedly on the transformed objective function and/or the original objective function to find the global optima. Thus, it is inevitable that the initial point influences the performance of an auxiliary function method. Moreover, since MEEF is essentially a random search method, the initial point might have even more influence on its performance. In the experiments, we use different initial points to test their influence. Table 4 shows the results of MEEF on the test problems starting from different initial points. From Table 4, we can find that MEEF finds optimal or close-to-optimal solutions for all test problems. Comparing the results listed in Tables 2 and 4, the influence of initial points is mainly on the number of function evaluations. In this respect, MEEF is reasonable and appears to be very similar to deterministic methods. However, for , when the initial point is far away from the global optimal solution, it takes much computational cost and time to reach the stopping condition; sometimes, MEEF fails to find a close-to-optimal solution for .
Table 7 shows the results of MEEF on and with dimensions from 2 to 50, starting from different initial points. Starting from the same initial points, the number of function evaluations increases with the dimension of the problem, which validates the intuitive deduction about the relationship between problem dimension and the number of function evaluations.
5.3.3. Comparisons
For comparison, Ge’s filled function method (FF) [3], Levy’s tunneling function method (Tun) [8], Wang’s auxiliary function method (NAF) [14], the cut-peak function method (CP) [11], the SCM method [10], the TRUST method [18], the multilevel single linkage method (MSL) [19], the diffusion equation method (DE) [20], Ma’s filled function method (PFFF) [21], and Wei’s filled function methods (NFFA [22] and NFFM [23]) are selected. These methods can find optimal or close-to-optimal solutions effectively. The results of these methods reported in the related literature are used directly for comparison. Table 5 presents the results obtained by the proposed algorithm and the compared methods.
From Table 5, we can see that our algorithm outperforms NFFM, NFFA, PFFF, and the cut-peak function method. Comparing the results listed in Tables 2 and 3 with the results of the CP method reported in [11], NFFA in [22], and NFFM in [23], it can be seen that MEEF finds better solutions using far fewer function evaluations, starting from different initial points, for all test problems. Also, the termination precision of MEEF is much higher than that of the CP method, NFFA, and NFFM.
To compare with NFFA in detail, we executed the proposed method MEEF on with different dimensions according to [22]. The comparison results are shown in Table 6, which shows vividly that MEEF finds close-to-optimal solutions of successfully for all dimensions in the experiments. The precision of the results obtained by MEEF is much higher than that of NFFA. However, when the dimension is more than 12, the numbers of function evaluations of MEEF are larger than those of NFFA, which shows that NFFA outperforms MEEF in this respect. It is worth mentioning that the stopping precision of NFFA is set to , which is much looser than that of MEEF. Also, NFFA cannot obtain the approximate optimal solutions successfully in every run. What is more, the parameter of NFFA needs to be adjusted according to dimensions and problems, as reported in [22], whereas MEEF needs no special parameter adjustment. Thus, it is easy to conclude that MEEF outperforms NFFA in stability and validity.
Compared with Wang’s auxiliary function method NAF, MEEF outperforms this method on , , and in Table 5. Furthermore, from Table 8, it can be seen clearly that MEEF outperforms NAF on and with dimensions from 2 to 50.
From Table 5, one can see that the performance of MEEF equals that of SCM. But for , MEEF finds the global optimal solution successfully while SCM fails. Compared with the other methods, MEEF has much better performance.
Since only the numbers of function evaluations of the methods listed in Table 5 are reported in the related papers, we compared the numbers of function evaluations with those of the methods listed in Table 5. From Tables 2, 4, and 5, it can be seen clearly that the proposed algorithm outperforms the compared methods in Table 5. Compared with Wang’s method, the proposed method uses fewer function evaluations on , , , and . The proposed method outperforms SCM except on , and it outperforms the other methods on the function evaluations of the problems listed in Table 5.
Numerical experiments on the problems and with dimensions from 2 to 50 were performed by Zhu et al. [10] and Wang et al. [14] to evaluate the performance of their methods. In this paper, we use the same experimental settings for and as in [10, 14] for a fair comparison, and the results can be found in Tables 7 and 8. In the experiments, the proposed algorithm was executed with two different initial points for each of and and terminated when holds. The final obtained solutions and the numbers of function evaluations are recorded in Table 7. For , it can be seen vividly that the influence of initial points is obvious not only on the number of function evaluations but also on the precision of the results. During the experiments on , our algorithm could obtain the global optimum starting from other different initial points. For , it can be seen from Table 7 that both the numbers of function evaluations and the obtained solutions show no great difference for MEEF starting from the two selected initial points. However, it should be pointed out that MEEF can find the global solution with the preset precision successfully in 30 runs when the initial points are taken as and . This indicates that the performance of MEEF on is influenced greatly by the initial points.
In the following experiments, the precision for was set to , the same as that of NAF [14], and the initial points were taken randomly. From Table 8, it can be seen that MEEF clearly outperforms the other 4 methods on the number of function evaluations for . But for , the comparison results are a little more complex. MEEF uses fewer function evaluations than NAF in 15 test cases. From Table 8, it can be seen that SCM seems to perform better than MEEF on . It should be pointed out that the precision of the results obtained in [10] was not mentioned, so we cannot judge which method performs better. For the tunneling function method [8], only 4 test cases were executed, without any solution precision given, so a comprehensive comparison cannot be made.
5.3.4. A Phenomenon for Discussion
Careful readers might notice a strange phenomenon in Table 8: the number of function evaluations of SCM and NAF on and does not show a steadily increasing trend with the dimension; instead, it varies greatly. For example, for , NAF uses 14956 function evaluations to obtain the global solution with the preset precision when the problem dimension is 21, while the number of function evaluations drops sharply to 93 when the dimension is 22. This is against the intuitive idea that the number of function evaluations increases with the dimension of the problem; in the traditional view, the higher the dimension is, the more local optima there are, which should take more computational cost. Why this counterintuitive phenomenon happens might be a strange and interesting topic.
6. Conclusions
In this paper, a new minimum-elimination-escape function method for global optimization is proposed, which can avoid the “Mexican hat” effect efficiently. In the proposed method, the minimum-elimination function is constructed to eliminate the solutions worse than the best one found so far. In this way, the influence of local optima is reduced. Flattened by the minimum-elimination function, the original problem is transformed into another optimization problem which shares the same global optimum with the original problem but has fewer local optima. A new minimum-escape function with one parameter is constructed for the flattened problem, and its properties are analyzed theoretically. Based on the two proposed functions, a minimum-elimination-escape function method for multimodal optimization is constructed. Theoretical analysis and experimental results indicate that the minimum-elimination-escape function method is insensitive to its unique parameter, which can be set easily. In the experiments, the proposed algorithm found the global optimal solutions of all 19 selected problems. The numerical results show that the proposed algorithm converges rapidly to the global optimum with high precision. Compared with 11 existing methods, the proposed algorithm performs more stably and effectively.
Appendix
Benchmarks
(i) Six-hump camel back [24]: where . The global minimizers are and , with the global optimal value .
(ii) Branin [25]: where . The global minimizers are , , and , with the global optimal value .
(iii) Goldstein-Price (GP) [26]: where , . The global minimizer is , with the global optimal value .
(iv) Rastrigin [7]: where , . This function has about 50 minimizers; the global minimizer is , with the global optimal value .
(v) Simplified Rosenbrock problem [3]: where , . The global minimizer is , with the optimal value .
(vi) Three-hump camel back problem [3, 10]: where , . The global minimizer is , with the optimal value .
(vii) Treccani problem [3, 10]: where , . The global minimizers are and , with the optimal value .
(viii) Two-dimensional Shubert problem I [3, 8, 10]: where , . This function has 760 minimizers, 18 of which are global minimizers, with the global optimal value .
(ix) Two-dimensional Shubert problem II [3, 8, 10]: where , . This function has 760 minimizers and only one global minimizer , with the global optimal value .
(x) Two-dimensional Shubert problem III [3, 8, 10]: where , . This function has 760 minimizers and only one global minimizer , with the global optimal value .
(xi) Shekel’s family [14]: where . These functions have local minima ( for , , , resp.), with local optimal values . The global minimizer is , . and are taken as shown in Table 1.
(xii) Sine-square I [27]: where , for . This function has about 60 minimizers; the global minimizer is , with the global optimal value .
(xiii) Sine-square II [27]: where and , for . This function has about 30 minimizers; the global minimizer is , with the global optimal value .
(xiv) Sine-square III [27]: where , for . This function has about 180 minimizers; the global minimizer is , with the global optimal value .
(xv) Generalized Schwefel problem [17]: where , for . The global minimizer is , with the global optimal value .
(xvi) Generalized Rastrigin function [17]: where , for . The global minimizer is , with the global optimal value .
(xvii) Test function [10, 14]: where . This function has local minimizers and only one global minimizer , with the global optimal value .
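The formulas themselves did not survive extraction above, but two of the listed benchmarks are standard and can be restated from the literature; the sketch below implements the six-hump camel back function [24] and the generalized Rastrigin function [17] in their common textbook forms (supplied here from the literature, not copied from this paper):

```python
import math

# Standard textbook forms of two of the benchmarks listed above.

def six_hump_camel_back(x, y):
    """f(x, y) = (4 - 2.1 x^2 + x^4/3) x^2 + x y + (-4 + 4 y^2) y^2.
    Global minima of about -1.0316 at (0.0898, -0.7126) and (-0.0898, 0.7126)."""
    return (4 - 2.1 * x ** 2 + x ** 4 / 3) * x ** 2 + x * y + (-4 + 4 * y ** 2) * y ** 2

def rastrigin(xs):
    """f(x) = sum_i (x_i^2 - 10 cos(2 pi x_i) + 10); global minimum 0 at the origin."""
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in xs)

assert abs(six_hump_camel_back(0.0898, -0.7126) - (-1.0316)) < 1e-3
assert rastrigin([0.0, 0.0, 0.0]) == 0.0
```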