Abstract

A filled function approach is proposed for solving non-smooth unconstrained global optimization problems. First, the definition of the filled function in Zhang (2009) for smooth global optimization is extended to the non-smooth case and a new one is put forward. Then, a novel filled function is proposed for non-smooth global optimization and a corresponding filled function algorithm for non-smooth optimization is designed. Finally, numerical tests are carried out. The computational results demonstrate that the proposed approach is efficient and reliable.

1. Introduction

Because of advances in science, economics, and engineering, global optimization of multi-minimum nonlinear programming problems has become a topic of great concern. Global optimization faces two difficulties: one is how to leave the current solution for a better one, and the other is how to decide whether the current solution is a global one. So far, most existing methods deal only with the first issue. Among these methods, the filled function method is a practically useful tool for global optimization. It was first put forward by [1] for smooth unconstrained global optimization. The idea behind filled function methods is to construct an auxiliary function that allows us to escape from a given local minimum of the original objective function. The method consists of two phases, local minimization and filling, which are used alternately until a global minimizer of the objective function is found. The method has been further developed by [2-8]. In practical problems, however, objective functions are not always smooth, so several scholars have extended the filled function method for smooth global optimization to non-smooth cases (see [9]). In this paper, we modify the concept of the filled function presented by [10] and propose a novel class of filled functions for non-smooth global optimization. This paper is divided into six sections. The next section presents some non-smooth preliminaries. In Section 3, the modified concept of the filled function for non-smooth global optimization is introduced, a novel class of filled functions is given, and its properties are investigated. In Section 4, a filled function algorithm is proposed. Section 5 presents some encouraging numerical results. Finally, in Section 6, the conclusion is given.

2. Non-Smooth Preliminaries

To introduce the filled function approach for non-smooth global optimization, we first recall some definitions and lemmas on non-smooth optimization that will be used in the following sections.

Definition 2.1. Let S be a subset of ℝⁿ. A function f: S → ℝ is said to be Lipschitz continuous with a constant L on S provided that, for the scalar L ≥ 0, one has |f(x) - f(y)| ≤ L‖x - y‖ for all points x, y ∈ S.

Definition 2.2 (see [11]). Let f be Lipschitz with constant K at the point x. The generalized gradient of f at x is defined as ∂f(x) = {ξ ∈ ℝⁿ : f°(x; v) ≥ ⟨ξ, v⟩ for all v ∈ ℝⁿ}, where f°(x; v) = lim sup_{y → x, t ↓ 0} [f(y + tv) - f(y)] / t is the generalized directional derivative of f in the direction v at x.

Lemma 2.3 (see [11]). Let f be Lipschitz with constant K at the point x. Then (a) the function v ↦ f°(x; v) is finite, sublinear, and satisfies |f°(x; v)| ≤ K‖v‖; (b) as a function of (x, v), f°(x; v) is upper semicontinuous; as a function of v alone, it is Lipschitz with constant K; (c) f°(x; -v) = (-f)°(x; v) for all v ∈ ℝⁿ; (d) ∂f(x) is a nonempty compact convex set, and for any ξ ∈ ∂f(x), one has ‖ξ‖ ≤ K; (e) f°(x; v) = max{⟨ξ, v⟩ : ξ ∈ ∂f(x)} for all v ∈ ℝⁿ.

Lemma 2.4 (see [11]). If x is a local minimizer of f, then 0 ∈ ∂f(x).
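As a concrete illustration of these notions (a standard textbook example, not one taken from this paper), consider the absolute value function on the real line:

f(x) = |x|, \qquad
f^{\circ}(0; v) = \limsup_{y \to 0,\; t \downarrow 0} \frac{|y + tv| - |y|}{t} = |v|, \qquad
\partial f(0) = \{\xi : |v| \ge \xi v \ \text{for all } v\} = [-1, 1].

Since 0 is a (global) minimizer of f(x) = |x|, this example is consistent with Lemma 2.4: 0 ∈ ∂f(0) = [-1, 1].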

3. A New Filled Function and Its Properties

Consider the unconstrained global optimization problem of minimizing f(x) over ℝⁿ. To begin with, this paper makes the following assumptions.

Assumption 3.1. f(x) is Lipschitz continuous with a constant L on ℝⁿ.

Assumption 3.2. f(x) is coercive, that is, f(x) → +∞ as ‖x‖ → +∞.

Note that Assumption 3.2 implies the existence of a compact set whose interior contains all minimizers of f. We assume that the value of f at any point located on the boundary of this compact set is greater than the value of f at any point inside it. Then the original problem is equivalent to the problem of minimizing f over this compact set.

Assumption 3.3. f has only a finite number of different minimal function values in this compact set.

Let x* be a local minimizer of f. In [10], the filled function for smooth global optimization was defined as follows.

Definition 3.4. A function is called a filled function of f at a local minimizer x* if it has the following properties: (1) x* is a strict maximizer of the function; (2) the function has no stationary points in the region where f(x) ≥ f(x*) and x ≠ x*; (3) if x* is not a global minimizer of f, then the function has at least one minimizer in the region where f(x) < f(x*).

This paper extends the above definition to the non-smooth case and gives the following definition of a filled function.

Definition 3.5. A function is called a filled function of f at a local minimizer x* if it has the following properties: (1) x* is a strict maximizer of the function; (2) the generalized gradient of the function does not contain 0 at any point x with f(x) ≥ f(x*) and x ≠ x*; (3) if x* is not a global minimizer of f, then the function has at least one minimizer in the region where f(x) < f(x*).

For convenience, we use separate symbols to denote the set of local minimizers and the set of global minimizers of the problem, respectively.

In what follows, we first design a function satisfying the following conditions:

(1)(2) (where ),(3)(4) for any

Some examples of the function with the properties 1–4 are , , .

Now, a filled function with two parameters for non-smooth global optimization is constructed as follows

where the two quantities appearing in the construction are parameters; the conditions they are required to satisfy are made precise in the theorems below.
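Since the exact formula of this new filled function is not reproduced in the present extract, the sketch below only illustrates how a two-parameter auxiliary function of this general type can be coded and evaluated; it uses the classical filled function of Ge, exp(-||x - x*||^2 / rho^2) / (r + f(x)), as a stand-in. The parameter names r and rho, the toy objective, and the chosen minimizer are our own assumptions and not the construction proposed in this paper, and Python is used purely for illustration (the paper's own implementation is in Fortran 95).

import numpy as np

def filled_function_ge(f, x_star, r, rho):
    # Classical two-parameter filled function of Ge, used here only as an
    # illustrative stand-in for a two-parameter construction; it requires
    # r + f(x) > 0 on the region of interest.  The local minimizer x_star of f
    # becomes a strict maximizer of P, in the spirit of Definition 3.5, property (1).
    x_star = np.asarray(x_star, dtype=float)
    def P(x):
        x = np.asarray(x, dtype=float)
        return np.exp(-np.linalg.norm(x - x_star) ** 2 / rho ** 2) / (r + f(x))
    return P

# Toy usage with a simple non-smooth, coercive objective (our own example).
f = lambda x: abs(x[0] - 1.0) + 0.1 * (x[0] ** 2 + x[1] ** 2)
x_star = np.array([1.0, 0.0])              # the minimizer of this particular f
P = filled_function_ge(f, x_star, r=10.0, rho=1.0)
print(P(x_star), P(np.array([2.0, 0.0])))  # P attains its largest value at x_star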

Next, we will show that the function is a filled function satisfying Definition 3.5.

Theorem 3.6. Let . If is large enough such that , then is a strict maximizer of .

Proof. Since , there exists a neighborhood of with such that for all and . By the mean value theorem, it follows that , where . By property (3) of , when is sufficiently large such that , we have that . Therefore, we obtain that . Hence, is a strict maximizer of .

Theorem 3.7. Assume that . For any , if is large enough such that , where , then one has . In other words, is not a stationary point of .

Proof. We first note that for any one has and
Denoting , for any , there exists such that

So, for any , one has . Then

Theorem 3.8. Assume that Then there exists a point such that is a minimizer of .

Proof. Since , there exists a point such that . Now, by the choice of the parameter , one has , so that there exists at least one point such that . It follows that . On the other hand, by the definition of , we have . Therefore, we conclude that for all , which implies that is a minimizer of .
Theorems 3.6, 3.7, and 3.8 show that the proposed function satisfies properties (1)-(3) of Definition 3.5; hence it is indeed a filled function.

Theorem 3.9. Suppose that and . (a) If there exists a constant such that , then, for sufficiently large , one has . (b) If there exists a constant such that , then, for sufficiently large , it holds that .

Proof. Let , that is . For simplicity, let
(a) In this case, we can see that
since and
Therefore, for large there exists
It follows that .
(b) If and is sufficiently large, then
Thus, we have .
If but then
Therefore, .

4. Solution Algorithm

In this section, we state our algorithm (NFFA) for non-smooth global optimization, based on the filled function proposed above.

Algorithm NFFA
Initialization Step:
(1) Set a disturbance .
(2) Choose an upper bound of , for example, set .
(3) Set
(4) Choose the 2n directions given by the positive and negative unit coordinate vectors, where n is the number of variables.
(5) Specify an initial point to start phase 1 of the algorithm.
(6) Set
(7) Set .
Main Step
(1) Starting from the initial point, activate a non-smooth local minimization procedure to minimize the objective function and find one of its local minimizers.
(2) Let
(3) Construct the filled function as follows:

(4) If , then go to 6; else use as an initial point, minimize the filled function problem by implementing a non-smooth local minimization procedure, and obtain a local minimizer .
(5) If satisfies , then set and . Use the point as a new initial point, minimize the problem by implementing a local search procedure, and obtain another local minimizer such that ; set , and go to 2. Otherwise, set , and go to 4.
(6) Increase by setting .
(7) If , then set and go to 3; otherwise, the algorithm is incapable of finding a better local minimizer; it stops and the current local minimizer is taken as a global minimizer.

The motivation and mechanism behind this algorithm are explained below.

In Step 4 of the Initialization Step, we choose the directions as the positive and negative unit coordinate vectors. For example, when the problem has two variables, the directions can be chosen as (1, 0), (-1, 0), (0, 1), and (0, -1).
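A minimal sketch of this choice of directions (the function name and the use of NumPy are our own; only the idea of taking the 2n positive and negative unit coordinate vectors comes from the text above):

import numpy as np

def coordinate_directions(n):
    # The 2n positive and negative unit coordinate vectors
    # e_1, -e_1, e_2, -e_2, ..., e_n, -e_n.
    eye = np.eye(n)
    return [sign * eye[i] for i in range(n) for sign in (1.0, -1.0)]

# For n = 2 this yields (1, 0), (-1, 0), (0, 1), (0, -1).
print(coordinate_directions(2))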

In Steps 1, 4, and 5 of the Main Step, we minimize the corresponding problem by applying non-smooth local optimization algorithms, such as the Hybrid Hooke and Jeeves-Direct Method for Non-smooth Optimization [12], Mesh Adaptive Direct Search Algorithms for Constrained Optimization [13], bundle methods, Powell's method, and so forth. In particular, the Hybrid Hooke and Jeeves-Direct Method is preferable to the others, since it is guaranteed to find a local minimum of a non-smooth function subject to simple bounds.
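The particular local solver is interchangeable. As a self-contained stand-in for such a non-smooth local minimization procedure (the paper itself relies on the Hybrid Hooke and Jeeves-Direct Method [12]), a plain compass search over the coordinate directions can be sketched as follows; the step size, tolerance, and iteration limit are illustrative choices of ours:

import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=100000):
    # Derivative-free local search: try +/- step along each coordinate,
    # accept the first improving point, and halve the step when no
    # coordinate move improves f.  Usable on Lipschitz (possibly
    # non-smooth) objectives, but intended only as a simple illustration.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Example: a non-smooth function whose minimizer is (1, -2).
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
print(compass_search(f, [0.0, 0.0]))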

Recall from Theorems 3.7 and 3.8 that the value of the parameter should be selected sufficiently large. In the algorithm, this parameter is first set to a small value and then gradually increased until it reaches the preset upper bound. If the parameter exceeds this upper bound and no point with a smaller objective function value can be found, then we conclude that no better local minimizer of the problem exists; the current local minimizer is taken as a global minimizer and the algorithm terminates.
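Putting these pieces together, the overall two-phase loop of Algorithm NFFA can be sketched as follows. The sketch reuses coordinate_directions, compass_search, and filled_function_ge from the snippets above; the disturbance delta, the initial and maximal parameter values, the growth factor, the box bounds (standing in for the compact set of Section 3), and the acceptance test (accept as soon as a point with a lower objective value is encountered) are illustrative assumptions, since the paper's exact formulas and parameter settings are not reproduced in this extract.

import numpy as np

def nffa_sketch(f, x0, bounds, delta=0.1, r_init=1.0, r_max=1e6, rho=2.0):
    # Skeleton of the two-phase filled function loop (illustrative only).
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)

    def inside(x):
        return bool(np.all(x >= lo) and np.all(x <= hi))

    def boxed(func):
        # Return +inf outside the box so the local searches stay inside it.
        def g(x):
            x = np.asarray(x, dtype=float)
            return func(x) if inside(x) else np.inf
        return g

    x0 = np.asarray(x0, dtype=float)
    x_star, f_star = compass_search(boxed(f), x0)        # phase 1: minimize f
    n, r = x0.size, r_init
    while r <= r_max:
        found_better = False
        for d in coordinate_directions(n):               # perturbation directions
            P = filled_function_ge(f, x_star, r, rho)    # stand-in filled function
            best = {"x": None}

            def tracked(x):
                # Record the first point met whose objective value is lower
                # than f(x_star); this stands in for the acceptance test.
                if best["x"] is None and f(x) < f_star:
                    best["x"] = np.array(x, dtype=float)
                return P(x)

            compass_search(boxed(tracked), x_star + delta * d)   # phase 2: minimize the filled function
            if best["x"] is not None:
                x_star, f_star = compass_search(boxed(f), best["x"])
                found_better = True
                break
        if found_better:
            r = r_init        # a better minimizer was found; restart the parameter
        else:
            r *= 10.0         # no success in any direction; enlarge the parameter
    return x_star, f_star     # r exceeded its bound; report the current minimizer

# One-dimensional toy run: local minimum at x = -2 (value 1), global minimum at x = 3 (value 0).
f = lambda x: min(abs(x[0] + 2.0) + 1.0, abs(x[0] - 3.0))
print(nffa_sketch(f, [-2.5], bounds=([-10.0], [10.0])))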

5. Numerical Experiment

In this section, we apply the above algorithm to several test problems to demonstrate its efficiency. All the numerical experiments were implemented in Fortran 95 under Windows XP on a Pentium(R) 4 CPU at 2.80 GHz. In our programs, the filled function is of the form

In the non-smooth case, we obtain a local minimizer by using the Hybrid Hooke and Jeeves-Direct Method. In the smooth case, we apply the PRP conjugate gradient method to compute the search direction and the Armijo line search to determine the step size. The numerical results indicate that the proposed approach is efficient.
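The Armijo rule mentioned above is the standard backtracking condition; a generic sketch is given below, in which the constants c and beta are typical default values rather than the ones actually used in the paper:

import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, c=1e-4, beta=0.5, max_backtracks=50):
    # Backtracking Armijo line search: shrink alpha until
    # f(x + alpha * d) <= f(x) + c * alpha * <grad f(x), d>.
    fx = f(x)
    slope = float(np.dot(grad_f(x), d))   # negative for a descent direction
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            return alpha
        alpha *= beta
    return alpha

# Example: one step on f(x) = ||x||^2 along the steepest-descent direction.
f = lambda x: float(np.dot(x, x))
grad_f = lambda x: 2.0 * x
x = np.array([1.0, -2.0])
d = -grad_f(x)
print(armijo_step(f, grad_f, x, d))   # prints 0.5 for these data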

Problem 5.1. The global minimum solution: and . In this experiment, we used an initial point . The algorithm successfully obtained the global minimizer. The time to reach the global minimizer was 21.7842 seconds, and the numbers of filled function and original objective function evaluations were 953 and 1167, respectively.

Problem 5.2. The global minimum solution: and . In this experiment, we used an initial point . The algorithm successfully obtained the global minimizer. The time to reach the global minimizer was 23.9746 seconds, and the numbers of filled function and original objective function evaluations were 8195 and 9479, respectively.

Problem 5.3. The global minimum solution: and . In this experiment, we used an initial point . The algorithm successfully obtained the global minimizer. The time to reach the global minimizer was 28.5745 seconds, and the numbers of filled function and original objective function evaluations were 1986 and 2488, respectively.

Problem 5.4. For any , the global minimum solution: and . In this experiment, we considered and used as an initial point. The algorithm successfully obtained the global minimizer. The time to reach the global minimizer was 93.6783 seconds, and the numbers of filled function and original objective function evaluations were 7631 and 9739, respectively.

Problem 5.5. For any , the global minimum solution: and . In this experiment, we considered , and used as an initial point. The algorithm successfully obtained the global minimizer. The time to reach the global minimizer was 149.5783 seconds, and the numbers of filled function and original objective function evaluations were 9761 and 14264, respectively.

Problem 5.6. where . For any , the global minimum solution: and . In this experiment, we considered , and used as an initial point. The algorithm successfully obtained the global minimizer. The time to reach the global minimizer was 172.8436 seconds, and the numbers of filled function and original objective function evaluations were 12674 and 16774, respectively.

6. Conclusions

In this paper, we first give a definition of a filled function for a non-smooth unconstrained minimization problem and construct a new filled function with two parameters. Then, we design a solution algorithm based on this filled function. Finally, we carry out numerical tests. The computational results suggest that the filled function approach is efficient. Of course, the efficiency of the proposed approach relies on the non-smooth local optimization procedure. Moreover, the numerical results show that the algorithm can move successively from one local minimizer to a better one, but in most cases more time is spent verifying that the current point is a global minimizer than finding one. However, global optimality conditions for continuous variables are, in general, still an open problem. A practical criterion for global optimality would provide solid stopping conditions for continuous filled function methods.

Acknowledgments

The authors thank the anonymous referees for many useful suggestions, which improved this paper. This work was partially supported by the National Natural Science Foundation of China (Grants no. 10771162 and 10971053) and the NNSF of Henan Province (Grants no. 084300510060 and 094300510050).