Abstract

The filled function method is an effective approach to finding the global minimizer of multidimensional multimodal functions. Conventional filled functions are numerically unstable because of exponential or logarithmic terms and are sensitive to their parameters. In this paper, a new filled function with only one parameter is proposed; it is continuously differentiable and is proved to satisfy all conditions of the filled function definition. Moreover, this filled function is not sensitive to its parameter, and overflow cannot occur in its evaluation. Based on these properties, a new filled function method is proposed, and it is numerically stable with respect to the initial point and parameter variation. Computer simulations indicate that the proposed filled function method is efficient and effective.

1. Introduction

More and more practical problems in science, economics, engineering, and other fields can be formulated as global optimization problems, and many researchers have been attracted to this field. In recent years, many new theoretical and computational contributions have been reported for solving global optimization problems. Global optimization is mainly concerned with the characteristics of, and algorithms for, multimodal functions. In general, the existing approaches can be classified into two categories: deterministic methods ([1–16], e.g.) and probabilistic methods ([17–22], e.g.). Typical examples of the former are the filled function method (FFM) [3–9, 13–15], the trajectory method [2, 10], the tunneling method [12], and the covering method [1, 16], whereas examples of the latter are the clustering method [17], the methods reported in [18, 19], the simulated annealing method [22], and genetic algorithms [20, 21]. Furthermore, some hybrid deterministic and probabilistic algorithms ([23], e.g.) have been proposed to solve practical problems.

However, the existence of multiple local minima of a general nonconvex objective function makes global optimization a great challenge. For global optimization problems, there are two major issues: (1) how to find a lower minimizer of the objective function from a known local minimizer, and (2) how to evaluate convergence and, accordingly, design stopping criteria.

In this paper, we focus our research on the FFM and mainly on issue (1). Among the existing methods for global optimization problems, the FFM appears to have several advantages over others, mainly its relatively easy realization through a process that successively finds lower local minima. The FFM was first proposed by Ge in [3] to find the global minimizer of an unconstrained multiextremal function. Later, many scholars did valuable work to improve this method ([5–9, 13–15, 24–28], e.g.). However, conventional filled functions are often nondifferentiable ([13, 25, 26], e.g.), need more than one adjustable parameter ([15], e.g.), or contain ill-conditioned terms ([3, 4, 9], e.g.). To overcome these shortcomings, some parameter-free filled functions ([13, 25], e.g.) and some filled functions without ill-conditioned terms ([26, 27], e.g.) have been proposed; however, they are usually nondifferentiable, which often introduces additional local minimizers. Some continuously differentiable filled functions with one parameter have also been proposed ([8], e.g.), but the parameter is not easy to adjust. To deal with this problem, a new class of filled functions with one parameter, which are continuously differentiable and whose parameter is easy to adjust, is proposed in this paper. Based on this, a new filled function method with a randomly and uniformly local search scheme is proposed, and the algorithm is numerically stable. In addition, the proposed method can be used to solve multidimensional multimodal problems.

2. Overview of the FFM

In this paper, we consider the following global optimization problem: where is a twice continuously differentiable function. Suppose satisfies the condition as . Then there exists a closed bounded domain, called an operating region, that contains all local minimizers of . The global optimization Problem can then be rewritten in the following equivalent form:

Because can be estimated before Problem is solved, we can assume without loss of generality that is known. We consider only Problem in the following.

2.1. Basic Concepts and Assumptions

In this paper, we adopt the following symbols: : the iteration number; : the initial point in the th iteration; : the local minimizer of the objective function in the th iteration; : the function value at ; : the basin of at an isolated local minimizer ; : the global minimizer of the objective function.

Assumption 1. The function in is continuously differentiable in , has only a finite number of minimizers in , and therefore every minimizer is an isolated minimizer.

The basin of at a local minimizer is defined in [3, 4] as a connected domain that contains and in which the steepest descent trajectory of converges to from any initial point. The minimal radius of at an isolated minimizer is defined as

Radius is not zero if is positive definite. The basin of at is said to be lower than another basin of at another local minimizer if and only if . The hill of at is the basin of at its isolated minimizer .

The definition of the filled function was first proposed by Ge in [3] as follows.

Definition 2. Suppose is a current local minimizer of . is said to be a filled function of at if it satisfies the following properties: (i) is a strict maximizer of , and the whole basin of at becomes a part of a hill of ; (ii) has no minimizers or stable points in any basin of higher than ; (iii) if has a lower basin than , then there is a point in such a basin that minimizes on the line through and .

Based on the filled functions, a global optimization problem can be solved via a two-phase cycle.

In Phase 1, we start from an initial point and use any local minimization method to find a local minimizer of .

In Phase 2, we construct a filled function at and minimize it in order to identify a point with . If such a point is found, it certainly lies in a lower basin than . We can then use as an initial point in Phase 1 again and hence find a better local minimizer of with . This process repeats until no better solution can be found. The final local minimizer is then taken as a global minimizer of .
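As an illustration of this two-phase cycle, the following is a minimal Python sketch. It is not the paper's implementation: the callable `make_filled` stands in for whichever filled function is used (the paper's own construction appears later as (7)), and scipy's L-BFGS-B plays the role of the local minimization method (the paper uses MATLAB's fmincon).

```python
import numpy as np
from scipy.optimize import minimize

def two_phase_cycle(f, make_filled, x0, bounds, max_cycles=20, seed=0):
    """Alternate Phase 1 (local minimization of f) with Phase 2
    (minimization of a filled function built at the current minimizer)."""
    rng = np.random.default_rng(seed)
    x_star = minimize(f, x0, method="L-BFGS-B", bounds=bounds).x      # Phase 1
    for _ in range(max_cycles):
        F = make_filled(f, x_star, f(x_star))                         # Phase 2
        start = x_star + 0.1 * rng.standard_normal(x_star.size)
        y = minimize(F, start, method="L-BFGS-B", bounds=bounds).x
        if f(y) < f(x_star):            # y lies in a lower basin of f
            x_star = minimize(f, y, method="L-BFGS-B", bounds=bounds).x
        else:                           # no better point found: stop
            break
    return x_star
```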

2.2. Overview of the FFM

As a deterministic yet universal global optimization technique, the FFM has developed through several generations. Representative examples of the first-generation FFM are the -functions [3] and the -functions [4] given by

The first-generation filled functions share a common feature: there are two adjustable parameters, and , which greatly affect the performance of the algorithms and need to be adjusted appropriately. However, adjusting the parameters is a very difficult task. Because of these drawbacks, the second-generation filled functions, which have only one parameter, were proposed. Among them, the representative examples are the -function [4] and the -function [29]: where the adjustable parameter is .

Consider where and ; and stand for the set of local minimizers of and global minimizers of , respectively.

These filled functions have only one parameter, in contrast to those of the first generation. However, they are liable to be ill-conditioned in practice since their function values grow exponentially because of the exponential term. As the adjustable parameter becomes larger and larger, which the FFM itself requires, the rapidly increasing exponential value may cause an overflow in the computation. To overcome this drawback, the -function was proposed by Liu [27]:

The -function retains the advantage of the -function of having only a single parameter, and it contains no exponential term. The performance of the -function in numerical experiments on a large set of test functions was quite satisfactory [27]. It can be regarded as the third generation because of the absence of the exponential term. Nevertheless, the -function has the drawback of being discontinuous at .
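To make the overflow issue described above concrete, the small numerical check below evaluates an exponential factor of the type that appears in second-generation filled functions. The term exp(A·||x − x*||²) is only a stand-in for the exponential factor of such functions, not the exact Q-function from [29].

```python
# Illustration of the overflow problem with exponential-type filled functions.
import numpy as np

x, x_star = np.array([2.0, 2.0]), np.array([0.0, 0.0])
dist2 = np.sum((x - x_star) ** 2)            # ||x - x*||^2 = 8

for A in (10.0, 50.0, 100.0, 200.0):
    with np.errstate(over="ignore"):
        val = np.exp(A * dist2)
    print(f"A = {A:6.1f}  ->  exp(A*||x-x*||^2) = {val}")
# As A grows (which the method itself requires), the value overflows to inf,
# whereas a bounded filled function avoids this failure mode entirely.
```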

However, on the one hand, continuity and differentiability of the filled function are required for the convergence analysis in theory [15]. On the other hand, computationally, most local minimization algorithms for numerical nonlinear programming require gradient information in their procedures (readers may refer to [30] or [31] for details). Thus, it is necessary to develop a continuously differentiable filled function with as few parameters as possible. There has already been some work in this area ([8], e.g.), but the parameters of those filled functions are not easy to adjust. Based on these considerations, a continuously differentiable filled function with one parameter is designed here, and the parameter, which can be taken as large as desired, is relatively easy to adjust.

3. A New Filled Function and Its Properties

Definition 2 relies on the concepts of the basin and hill of , which require that the minimizers in the operating region be isolated, and Definition 2 also requires that there exist a minimal point of along a line. This is difficult to guarantee. Therefore, many improvements of the definition have been given in the literature ([6, 14, 28], e.g.), which make it more convenient to construct a new filled function. In this section, we use the definition in [28] for Problem .

Definition 3. Suppose is a current local minimizer of . is said to be a filled function of at if it satisfies the following properties: (i) is a strict maximizer of ; (ii) for any , one has , and ; (iii) if is not empty, then there exists a point such that is a local minimizer of .

Note that Definition 3 of the filled function differs from the definition given in [3]. It is much easier to construct a new filled function satisfying Definition 3, and a local optimal solution of the filled function can be found easily. For example, suppose that is not a global minimizer; then, by condition (iii) of Definition 3, we can find a point by minimizing . Therefore, we can obtain a lower local minimizer of by a local search starting from . In the process of minimizing , it is not required, unlike in Definition 2, that lie on the straight line through and . Hence the design of the filled function is easier and more flexible.

In order to find a global minimizer of , the major issue of the filled function method is to find a lower minimizer of or to determine whether the obtained local minimizer is a global minimizer of . This relies heavily on the performance of the filled function used.

In this section, we propose a new filled function for Problem at a local minimizer as follows: where is an adjustable positive real number that can be taken as large as desired and serves as the weight factor.

Note that the proposed filled function has several advantages. First, it has only one parameter, a positive real number that can be taken as large as desired, so it is easy to adjust. Second, it is continuously differentiable, which makes it easier to minimize with existing local optimization methods. Finally, is bounded, which ensures that the computation of will not overflow and is numerically stable. The following theorems show that satisfies Definition 3.

Theorem 4. Suppose that is a local minimizer of ; then is a strict local maximizer of .

Proof. Since is a local minimizer of , there exist a small positive real number and a neighborhood such that, for all with , one has and . Thus, is a strict local maximizer of .

Theorem 4 clearly shows that satisfies the property (i) of Definition 3.

Theorem 5. Suppose Assumption 1 is satisfied and is a local minimizer of . Then, for any , one has .

Proof. For any , , and for , one has . Consequently, .

Theorem 6. Suppose Assumption 1 is satisfied, is a local minimizer of , and is not empty. Then there exists a point such that is a local minimizer of .

Proof. Let , ; then , where is the boundary of (and ). Since is continuous and both and are contained in , both and are bounded closed sets. For any , Since is a bounded closed set, there exists such that Note that, for any , , and Since is not empty, there exists a point such that when , for example , one has . In other words, when , there exists at least one point such that . Since is continuous on the closed bounded set , it must have a global minimizer on for this fixed . Note that and is a global minimizer of on (and ); then , and thus , that is, is also a global minimizer of on . The proof is complete.

Theorems 4–6 state that the proposed filled function satisfies the conditions of Definition 3.

4. Filled Function Algorithm

4.1. A New Local Search Method: RULS

A conventional local optimization method minimizes the function directly from the initial point , and a local minimizer is then obtained. In the process, the function is evaluated repeatedly when the derivative and the search direction are calculated; in addition, the number of function evaluations grows further during the one-dimensional line search. More importantly, as the dimension increases, the numbers of iterations and of function evaluations also increase. Thus, the amount of computation for traditional local search methods is very large, which affects computational efficiency. In this subsection, a new local search strategy called randomly and uniformly local search (RULS) is given, which can move the initial point closer to the local optimal solution, so that the numbers of iterations and function evaluations are reduced and convergence is accelerated.

Algorithm RULS. The details for RULS are as follows.

Step 1. is the initial point for the th iteration, and is the set of all points that have been used so far. Calculate , , where denotes the number of elements of the set .

Step 2. is a neighborhood of , where , , and . Set and .
For to :
  If , then .
  If , then .
End for

Randomly and uniformly select points in , where is the dimension of the problem, and is the number of points selected in each dimension, .

Step 3. Calculate the function values of these points and find the points corresponding to the smallest function values, where

Let be the set of initial points.

Step 4. Starting from each point , minimize by using a local optimization method to obtain a sequence of local minimizers and a sequence of local minimum values ; set .

Step 5. Calculate to obtain a local minimizer , .

Explanation of Algorithm RULS. Uniformly and randomly generate some points near the initial point , select the point with the smallest function value as the new initial point, minimize the function from it, and obtain a local minimizer . The process is illustrated in Figures 4(d1), 4(d2), and 4(d3).
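A rough Python sketch of the RULS idea as explained above is given below. The radius rule, the number of sampled and kept points, and the use of scipy's L-BFGS-B in place of MATLAB's fmincon are assumptions made for illustration; the paper's exact rules are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def ruls(f, x0, bounds, radius=0.5, m=3, n_keep=1, seed=None):
    """Sample m*n points uniformly in a neighborhood of x0 (clipped to the
    operating region), keep the best-valued samples as starting points, and
    run an ordinary local minimizer from them; return the best result."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    n_dim = x0.size
    # Step 2: neighborhood of x0, clipped to the operating region.
    box_lo = np.maximum(x0 - radius, lo)
    box_hi = np.minimum(x0 + radius, hi)
    samples = rng.uniform(box_lo, box_hi, size=(m * n_dim, n_dim))
    # Step 3: keep the sample(s) with the smallest objective value.
    order = np.argsort([f(p) for p in samples])
    starts = samples[order[:n_keep]]
    # Steps 4-5: local minimization from each kept start; return the best.
    results = [minimize(f, s, method="L-BFGS-B", bounds=bounds) for s in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun
```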

4.2. A New Filled Function Algorithm (NFFA)

Based on the results above, a new filled function algorithm is proposed as follows.

Algorithm 1 (NFFA)

Step 1 (initialization step). (a) Choose a tolerance , for example, . (b) Choose a large integer constant , a positive real number as large as possible, and a small constant . (c) Choose a positive integer . (d) Set and .

Step 2. Randomly and uniformly select points in the operating region, where , , , is the dimension of the problem, and is the number of points selected in each dimension. Calculate the function values of these points and find the points corresponding to the smallest function value . Select for all .

Step 3. Starting from , minimize by using RULS to obtain a local minimizer ; then go to Step 6.

Step 4. Construct a filled function (7) at the local minimizer , and go to Step 5.

Step 5. Set , use as the initial point to minimize by using RULS, and find minimizers of . Let and , and then go to Step 3.

Step 6 (termination step). If and , ( ) or , the algorithm stops, and is taken as a global minimizer of ; otherwise, increase and go to Step 4.

Some explanations of the above filled function algorithm are as follows. (1) To minimize and , a local optimization method must first be selected; in our algorithm, the MATLAB function "fmincon" is used. (2) In Step 5, the small constant must be chosen carefully; in the algorithm, is selected to guarantee that is greater than a threshold. (3) Step 5 means that if a local minimizer of is found in with , we can use as the initial point to minimize and obtain a better local minimizer of . (4) In Step 6, it is necessary to increase , because a lower local minimizer may not be found when is not large enough.
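The following is a compressed Python sketch of the NFFA loop built on the RULS routine sketched earlier. The callable `make_filled(f, x_star, f_star, q)` stands in for filled function (7), whose explicit form is given in the paper; the parameter-update rule, the perturbed restart, and the termination test are simplified assumptions.

```python
import numpy as np

def nffa(f, make_filled, bounds, q0=1.0, q_factor=10.0, q_max=1e8,
         eps=1e-8, m=3, seed=0):
    """Sketch of the new filled function algorithm (NFFA)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    n_dim = lo.size
    # Step 2: uniform random points in the operating region, best one kept.
    pts = rng.uniform(lo, hi, size=(m * n_dim, n_dim))
    x0 = min(pts, key=f)
    # Step 3: local minimizer of f obtained via RULS.
    x_star, f_star = ruls(f, x0, bounds)
    q = q0
    while q <= q_max:
        # Step 4: build the filled function at the current minimizer.
        F = make_filled(f, x_star, f_star, q)
        # Step 5: minimize it (again via RULS) from a slightly perturbed start.
        y, _ = ruls(F, x_star + 0.1 * rng.standard_normal(n_dim), bounds)
        if f(y) < f_star - eps:
            # A lower basin was found: restart Phase 1 from y.
            x_star, f_star = ruls(f, y, bounds)
            q = q0
        else:
            # Step 6: no improvement; enlarge the parameter and retry.
            q *= q_factor
    return x_star, f_star
```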

5. Numerical Experiments

5.1. Test Problems

In this section, the proposed algorithm is tested on Problem 1 and on Problems 2–9, taken from [14].

Problem 1 (one-dimensional function). We have The global minimizer is , and the global optimal value is .

Problem 2 (two-dimensional function). We have where , and . The global minimum value is for all .

Problem 3 (three-hump back camel function). We have The global minimizer is , and the global optimal value is .

Problem 4 (six-hump back camel function). We have The global minimizers are and , and the global optimal value is .

Problem 5 (Treccani function). We have This problem has two global minimizers in total, which are and , and the global optimal value is .

Problem 6 (Goldstein and Price function). We have where , and .

The global minimizer is and the global optimal value is .

Problem 7 (two-dimensional Shubert function). We have This problem has two global minimizers in total, which are and . The global minimum value is .

Problem 8 (Shekel’s function). We have where the coefficients , , and , , and , are given in Table 1. The global minimizer is and the global optimal value is .

Problem 9 ( -dimensional function). We have where . The global minimizer of this problem is for all .
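For reference, a few of the benchmarks listed above are given below in their standard literature forms, since the paper's own formulas are not reproduced here; these are the usual textbook expressions and are provided for illustration only.

```python
import numpy as np

def three_hump_camel(x):        # Problem 3: global minimum 0 at (0, 0)
    x1, x2 = x
    return 2*x1**2 - 1.05*x1**4 + x1**6/6 + x1*x2 + x2**2

def six_hump_camel(x):          # Problem 4: global minimum about -1.0316
    x1, x2 = x
    return (4 - 2.1*x1**2 + x1**4/3)*x1**2 + x1*x2 + (-4 + 4*x2**2)*x2**2

def goldstein_price(x):         # Problem 6: global minimum 3 at (0, -1)
    x1, x2 = x
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b
```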

5.2. The Experimental Results

In the following, the procedures for solving Problem 1 with the traditional filled function method and with the proposed algorithm are given, and a comparison of the two algorithms is also provided.

5.2.1. Conventional Filled Function Method for Solving Problem 1

The procedure of the conventional filled function method is as follows. (1) Randomly select a point in the operating region ; then minimize from by using the local minimization function "fmincon" to obtain a local minimizer . (2) Construct a filled function (7) at , where . (3) Minimize to obtain a local minimizer of ; the specific processes of Steps (1), (2), and (3) are shown in Figure 1. Note that Figure 1(a′) enlarges the rectangular box in Figure 1(a); Figures 2(b′), 3(c′), and so on, are analogous and are not explained again. (4) Minimize from to obtain a local minimizer of . Construct a filled function at , where , and then minimize the filled function to obtain a local minimizer of (as shown in Figure 2). (5) Minimize from to obtain a local minimizer . Construct a filled function at , where , which is shown in Figure 3, and then minimize . No stable point is found, so the iteration terminates; set . The global minimizer and the global minimum value are obtained.

The conventional filled function algorithm is run independently 50 times; the mean number of function evaluations is 18.7119.

5.2.2. The Proposed Filled Function Algorithm for Solving Problem 1

The procedure of the proposed filled function algorithm for solving Problem 1 is as follows. (1) Uniformly and randomly generate three points , , and in the operating region, set , and calculate the function values of the three points. Find the point corresponding to the smallest function value, as shown in (d1) of Figure 4. (2) Calculate by using the rule in RULS, where , ; let , and then randomly select three points in , calculate their values of to find the point corresponding to the smallest function value, as shown in (d2) of Figure 4, and set . (3) Minimize from by using a local optimization method to find a local minimizer and a minimum value , as shown in (d3) of Figure 4, and set . Steps (2) and (3) describe the specific process of RULS, as shown in (d2) and (d3) of Figure 4. (4) Construct a filled function (7) at , as shown in Figure 5, where . (5) Optimize the function at by using the RULS method to obtain a minimizer of , as shown in (d4) of Figure 4. Set and , and then minimize by using RULS to obtain and . (6) Construct a filled function at , where , as shown in Figure 6. (7) The function has no stable point in , so the iteration terminates. Set . The global minimizer and the global minimum value are obtained.

The proposed algorithm is run independently 50 times; the mean number of function evaluations is 13.6168, which is significantly smaller than the result obtained with the conventional filled function method.

The experimental results in Sections 5.2.1 and 5.2.2 and the mean numbers of function evaluations show that the proposed filled function method is more efficient.

5.2.3. The Proposed Filled Function Method for Solving the Problems Taken from [14]

The proposed algorithm is executed on test Problems 2–9 above. The results obtained by the proposed algorithm and the comparison with [14] are listed in Tables 2, 3, and 4.

The symbols used in Tables 2, 3, and 4 are as follows: .: the number of the test problem; : the dimension of the test problem; : the parameter of the filled function; : the iteration number; : the total number of function and gradient evaluations of and ; : the local minimizer of the objective function in the th iteration; : the function value at ; : the global minimizer of the objective function; times: the running time of a single run (seconds); -mean: the mean function value over the 50 runs; -best: the best function value over the 50 runs; -std: the standard deviation of the function values over the 50 runs; ratio: the ratio of successful runs, that is, the proportion of the 50 runs that find the true optimal solution.
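The statistics reported in the tables can be computed from the 50 independent runs as sketched below. The success tolerance `tol` is an assumption; the paper does not state the exact criterion used to count a run as successful.

```python
import numpy as np

def run_statistics(f_values, f_true, tol=1e-4):
    """Compute f-mean, f-best, f-std, and the success ratio from the best
    objective value recorded in each of the independent runs."""
    f_values = np.asarray(f_values, dtype=float)
    return {
        "f-mean": f_values.mean(),
        "f-best": f_values.min(),
        "f-std":  f_values.std(ddof=1),
        "ratio":  np.mean(np.abs(f_values - f_true) <= tol),
    }
```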

It can be seen from Tables 2, 3, and 4 that the proposed algorithm has the following advantages.

(1) More Minimizers Are Obtained. In the proposed algorithm, the initial points are randomly and uniformly generated. This helps to obtain more distinct global minimizers in the 50 runs. For example, for Problems 2, 5, and 7, more optimal solutions are obtained by the proposed algorithm than by the algorithm in [14].

(2) More Effective and Efficient. The optimal solutions of all test problems can be found by the proposed algorithm, which indicates its effectiveness. In addition, it can be seen from Table 3 that both the mean number of function evaluations and the running time of a single run obtained by the proposed algorithm are smaller than those obtained by the algorithm in [14]. This also indicates the efficiency of the proposed algorithm.

(3) More Stable. It can be seen from the ratio column in Table 2 that the ratio of successful runs is more than 95.92%, and in most cases it is 100%, which shows that the proposed algorithm is stable. In addition, it can be seen from the -mean and -std columns that the difference between -mean and -best is small and that -std is also small. This also indicates that the proposed algorithm is stable and robust with respect to the initial points and parameter variation.

(4) Applicable to Multidimensional Problems. In Table 4, Problem 9 is tested with different dimensions, and the numerical results indicate that the proposed algorithm is suitable for solving multidimensional problems.

6. Conclusions

The filled function method is an approach to finding the global minimizers of multimodal functions. The existing filled functions have drawbacks such as being nondifferentiable at some points in the search domain, including an exponential term, containing more than one adjustable parameter, or being discontinuous. In this paper, a filled function with one parameter is designed that is continuously differentiable and overcomes these shortcomings. Based on this, a new filled function method with a randomly and uniformly local search scheme is proposed, and the algorithm is numerically stable. The results of computer simulations indicate that the proposed filled function method is effective and efficient.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (no. 61272119).