Research Article | Open Access


Weixiang Wang, Youlin Shang, Ying Zhang, "Finding Global Minima with a Filled Function Approach for Non-Smooth Global Optimization", Discrete Dynamics in Nature and Society, vol. 2010, Article ID 843609, 10 pages, 2010. https://doi.org/10.1155/2010/843609

# Finding Global Minima with a Filled Function Approach for Non-Smooth Global Optimization

Academic Editor: Elena Braverman
Received: 12 Oct 2009
Accepted: 05 Feb 2010
Published: 08 Mar 2010

#### Abstract

A filled function approach is proposed for solving a non-smooth unconstrained global optimization problem. First, the definition of the filled function in Zhang (2009) for smooth global optimization is extended to the non-smooth case and a new definition is put forward. Then, a novel filled function is proposed for non-smooth global optimization and a corresponding algorithm based on this filled function is designed. Finally, numerical tests are made. The computational results demonstrate that the proposed approach is efficient and reliable.

#### 1. Introduction

Because of advances in science, economics, and engineering, global optimization of multi-minimum nonlinear programming problems has become a topic of great concern. Global optimization faces two difficulties: one is how to leave the current local solution for a better one, and the other is how to decide whether the current solution is a global one. So far, most existing methods deal only with the first issue. Among these methods, the filled function method is a practically useful tool for global optimization. It was first put forward by Ge [1] for smooth unconstrained global optimization. The idea behind filled function methods is to construct an auxiliary function that allows us to escape from a given local minimum of the original objective function. The method consists of two phases, local minimization and filling, which are used alternately until a global minimizer of the objective function is found. The method has been further developed by several authors. In practical problems, however, objective functions are not always smooth, so several scholars have extended the filled function method for smooth global optimization to non-smooth cases. In this paper, we modify the concept of the filled function presented in Zhang [10] and propose a novel class of filled functions for non-smooth global optimization. This paper is divided into six sections. The next section presents some non-smooth preliminaries. In Section 3, the modified concept of the filled function for non-smooth global optimization is introduced, a novel class of filled functions is given, and its properties are investigated. In Section 4, a filled function algorithm is proposed. Section 5 presents some encouraging numerical results. Finally, in Section 6, the conclusion is given.

#### 2. Non-Smooth Preliminaries

To introduce the concept of the filled function approach for non-smooth global optimization, we recall some definitions and lemmas on non-smooth optimization that will be used in the following sections.

Definition 2.1. Let $S$ be a subset of $\mathbb{R}^n$. A function $f: S \to \mathbb{R}$ is said to be Lipschitz continuous with a constant $L$ on $S$ provided that, for some scalar $L \ge 0$, one has $|f(x) - f(y)| \le L \|x - y\|$ for all points $x, y \in S$.
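As an illustration of Definition 2.1 (not part of the paper), a lower bound on the Lipschitz constant can be estimated numerically by sampling difference quotients; the helper below is a hypothetical Python sketch:

```python
import random

def lipschitz_estimate(f, a, b, samples=10000, seed=0):
    """Estimate the smallest Lipschitz constant of f on [a, b]
    by sampling the difference quotients |f(x) - f(y)| / |x - y|."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        x, y = rng.uniform(a, b), rng.uniform(a, b)
        if x != y:
            best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best

# f(x) = |x| is non-smooth at 0 but Lipschitz with constant L = 1.
L_hat = lipschitz_estimate(abs, -2.0, 2.0)
```

For $f(x) = |x|$ on $[-2, 2]$, the sampled quotients never exceed the true constant $L = 1$, consistent with the definition.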

Definition 2.2 (see [11]). Let $f$ be Lipschitz with constant $L$ at the point $x$. The generalized gradient of $f$ at $x$ is defined as $\partial f(x) = \{\xi \in \mathbb{R}^n : f^{\circ}(x; v) \ge \langle \xi, v \rangle \ \text{for all } v \in \mathbb{R}^n\}$, where $f^{\circ}(x; v) = \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + tv) - f(y)}{t}$ is the generalized directional derivative of $f$ in the direction $v$ at $x$.

Lemma 2.3 (see [11]). Let $f$ be Lipschitz with constant $L$ at the point $x$. Then: (a) the function $v \mapsto f^{\circ}(x; v)$ is finite, sublinear, and satisfies $|f^{\circ}(x; v)| \le L\|v\|$; (b) as a function of $(x, v)$, $f^{\circ}(x; v)$ is upper semicontinuous; as a function of $v$ alone, it is Lipschitz with constant $L$; (c) $f^{\circ}(x; -v) = (-f)^{\circ}(x; v)$ for all $v$; (d) $\partial f(x)$ is a nonempty compact convex set, and for any $\xi \in \partial f(x)$, one has $\|\xi\| \le L$; (e) $f^{\circ}(x; v) = \max\{\langle \xi, v \rangle : \xi \in \partial f(x)\}$ for all $v$.
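Property (e) of Lemma 2.3 can be checked numerically on a simple non-smooth function. The sketch below (illustrative, not from the paper; all names are hypothetical) approximates the generalized directional derivative of $f(x) = |x|$ at $0$, where $\partial f(0) = [-1, 1]$ gives $\max\{\xi v : \xi \in [-1, 1]\} = |v|$:

```python
def clarke_dirderiv(f, x, v, eps=1e-4, grid=200):
    """Numerically approximate the Clarke generalized directional
    derivative f°(x; v) = limsup_{y -> x, t -> 0} (f(y + t v) - f(y)) / t
    by scanning base points y near x with a small fixed step t."""
    best = float("-inf")
    for i in range(grid):
        y = x - eps + (2 * eps) * i / (grid - 1)   # y ranges over [x-eps, x+eps]
        t = eps / 10
        best = max(best, (f(y + t * v) - f(y)) / t)
    return best

# For f(x) = |x| at x = 0 and v = 1: f°(0; 1) = |1| = 1.
approx = clarke_dirderiv(abs, 0.0, 1.0)
```

The approximation matches the closed-form value $f^{\circ}(0; v) = |v|$ from property (e).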

Lemma 2.4 (see [11]). If $x^*$ is a local minimizer of $f$, then $0 \in \partial f(x^*)$.
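As a standard illustration of Lemma 2.4 (not taken from the paper): for the non-smooth function $f(x) = |x|$, the generalized gradient at the minimizer $x^* = 0$ collects all slopes between the one-sided derivatives,

```latex
\partial f(0) = [-1, 1], \qquad \text{so} \qquad 0 \in \partial f(0),
```

consistent with the lemma, even though $f$ is not differentiable at $0$ in the classical sense.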

#### 3. A New Filled Function and Its Properties

Consider the unconstrained problem of minimizing $f(x)$ over $\mathbb{R}^n$. To begin with, this paper makes the following assumptions.

Assumption 3.1. $f(x)$ is Lipschitz continuous with a constant $L$ on $\mathbb{R}^n$.

Assumption 3.2. $f(x)$ is coercive; that is, $f(x) \to +\infty$ as $\|x\| \to +\infty$.

Note that Assumption 3.2 implies the existence of a compact set $\Omega$ whose interior contains all minimizers of $f(x)$. We assume that the value of $f(x)$ for $x$ located on the boundary of $\Omega$ is greater than the value of $f(x)$ for any $x$ inside $\Omega$. Then the original problem is equivalent to the problem of minimizing $f(x)$ over $\Omega$.

Assumption 3.3. $f(x)$ has only a finite number of different minimal function values.

Let $x^*$ be a local minimizer of $f(x)$. In [10], the filled function for smooth global optimization was defined as follows.

Definition 3.4. A function $P(x)$ is called a filled function of $f(x)$ at a local minimizer $x^*$ if $P(x)$ has the following properties: (1) $x^*$ is a strict maximizer of $P(x)$; (2) $P(x)$ has no stationary points in the region $\{x : f(x) \ge f(x^*),\ x \ne x^*\}$; (3) if $x^*$ is not a global minimizer of $f(x)$, then $P(x)$ has at least one minimizer in the region $\{x : f(x) < f(x^*)\}$.

This paper extends the above definition to the non-smooth case and gives the following definition of a filled function.

Definition 3.5. A function $P(x)$ is called a filled function of $f(x)$ at a local minimizer $x^*$ if $P(x)$ has the following properties: (1) $x^*$ is a strict maximizer of $P(x)$; (2) one has $0 \notin \partial P(x)$ for any $x \in \{x : f(x) \ge f(x^*),\ x \ne x^*\}$; (3) if $x^*$ is not a global minimizer of $f(x)$, then $P(x)$ has at least one minimizer in the region $\{x : f(x) < f(x^*)\}$.

For convenience, we use and to denote the set of local minimizers and the set of global minimizers of problem , respectively.

In what follows, we first design a function satisfying the following conditions:

(1)(2) (where ),(3)(4) for any

Some examples of the function with the properties 1–4 are , , .

Now, a filled function with two parameters for non-smooth global optimization is constructed as follows

where and are parameters, satisfies where

Next, we will show that the function is a filled function satisfying Definition 3.5.

Theorem 3.6. Let . If is large enough such that , then is a strict maximizer of .

Proof. Since , there exists a neighborhood of with such that for all and By the mean value theorem, it follows that where
By the property (3) of , when is sufficiently large such that
we have that Therefore we obtain that Hence, is a strict maximizer of .

Theorem 3.7. Assume that . For any , if is large enough such that , where , then one has ; in other words, is not a stationary point of .

Proof. We first note that for any one has and
Denoting , for any , there exists such that

So, to any , one has . Then

Theorem 3.8. Assume that Then there exists a point such that is a minimizer of .

Proof. Since , there exists a point such that . Now, by the choice of the parameter , one has , so that there exists at least one point such that . It follows that . On the other hand, by the definition of , we have . Therefore, we conclude that for all , which implies that is a minimizer of .
Theorems 3.6–3.8 show clearly that the proposed filled function satisfies properties (1)–(3) of Definition 3.5.

Theorem 3.9. Suppose that and (a)If there exists a constant such that , then, for sufficiently large one has (b)If there exists a constant such that then for sufficiently large it holds

Proof. Let , that is . For simplicity, let
(a) In this case, we can see that
since and
Therefore, for large there exists
It follows that .
(b) If and is sufficiently large, then
Thus, we have .
If but then
Therefore, .

#### 4. Solution Algorithm

In this section, we state our algorithm (NFFA) for non-smooth global optimization, based on the filled function proposed above.

Algorithm NFFA

Initialization Step:
(1) Set a disturbance .
(2) Choose an upper bound of , for example, set .
(3) Set .
(4) Choose directions , where , is the number of variables.
(5) Specify an initial point to start Phase 1 of the algorithm.
(6) Set .
(7) Set .

Main Step
(1) Starting from , activate a non-smooth local minimization procedure to minimize and find its local minimizer .
(2) Let .
(3) Construct the filled function as follows:

(4) If , then go to (6); else, using as an initial point, minimize the filled function by implementing a non-smooth local minimization procedure and obtain a local minimizer .
(5) If satisfies , then set and . Using as a new initial point, minimize the problem by implementing a local search procedure and obtain another local minimizer of such that ; set and go to (2). Otherwise, set and go to (4).
(6) Increase by setting .
(7) If , then set and go to (3); otherwise the algorithm is incapable of finding a better local minimizer; it stops and is taken as a global minimizer.

The motivation and mechanism behind this algorithm are explained below.

In Step (4) of the Initialization Step, we choose the directions as positive and negative unit coordinate vectors, where . For example, when , the directions can be chosen as .

In Steps (1), (4), and (5) of the Main Step, we minimize the problem by applying non-smooth local optimization algorithms, such as the Hybrid Hooke and Jeeves-Direct Method for non-smooth optimization [12], Mesh Adaptive Direct Search algorithms for constrained optimization [13], bundle methods, Powell's method, and so forth. In particular, the Hybrid Hooke and Jeeves-Direct Method is preferable, since it is guaranteed to find a local minimum of a non-smooth function subject to simple bounds.

Recall from Theorems 3.7 and 3.8 that the value of should be selected sufficiently large. In the Main Step, we first set ; it is then gradually increased until it reaches the preset upper bound . If the parameter exceeds and no point with can be found, then we conclude that no better local minimizer of the problem exists; the current local minimizer is taken as a global minimizer and the algorithm is terminated.

#### 5. Numerical Experiment

In this section, we apply the above algorithm to several test problems to demonstrate its efficiency. All the numerical experiments are implemented in Fortran 95 under Windows XP on a Pentium(R) 4 CPU at 2.80 GHz. In our programs, the filled function is of the form

In the non-smooth case, we obtain a local minimizer by using the Hybrid Hooke and Jeeves-Direct Method [12]. In the smooth case, we apply the PRP Conjugate Gradient Method to get the search direction and the Armijo line search to get the step size. The numerical results show that the proposed approach is efficient.
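As an illustration of the step-size rule mentioned above, a generic backtracking Armijo line search can be sketched as follows (this is a standard formulation, not the authors' exact implementation; the names are hypothetical):

```python
def armijo_step(f, grad_f, x, d, sigma=1e-4, beta=0.5, t0=1.0):
    """Backtracking Armijo line search: shrink the step t until the
    sufficient-decrease condition
        f(x + t d) <= f(x) + sigma * t * <grad f(x), d>
    holds for the descent direction d."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad_f(x), d))
    assert slope < 0, "d must be a descent direction"
    t = t0
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + sigma * t * slope:
        t *= beta
    return t

# Example: f(x) = x^2 with the steepest-descent direction from x = 1.
f1 = lambda v: v[0] ** 2
g1 = lambda v: [2.0 * v[0]]
t = armijo_step(f1, g1, [1.0], [-2.0])
```

Here the full step `t = 1` overshoots the minimum and fails the decrease test, so one backtracking halving is taken.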

Problem 5.1. The global minimum solution: and . In this experiment, we used the initial point . The algorithm successfully obtained the global minimizer. The time to reach it was 21.7842 seconds, and the numbers of evaluations of the filled function and of the original objective function were 953 and 1167, respectively.

Problem 5.2. The global minimum solution: and . In this experiment, we used the initial point . The algorithm successfully obtained the global minimizer. The time to reach it was 23.9746 seconds, and the numbers of evaluations of the filled function and of the original objective function were 8195 and 9479, respectively.

Problem 5.3. The global minimum solution: and . In this experiment, we used the initial point . The algorithm successfully obtained the global minimizer. The time to reach it was 28.5745 seconds, and the numbers of evaluations of the filled function and of the original objective function were 1986 and 2488, respectively.

Problem 5.4. For any , the global minimum solution: and . In this experiment, we considered and used as the initial point. The algorithm successfully obtained the global minimizer. The time to reach it was 93.6783 seconds, and the numbers of evaluations of the filled function and of the original objective function were 7631 and 9739, respectively.

Problem 5.5. For any , the global minimum solution: and . In this experiment, we considered and used as the initial point. The algorithm successfully obtained the global minimizer. The time to reach it was 149.5783 seconds, and the numbers of evaluations of the filled function and of the original objective function were 9761 and 14264, respectively.

Problem 5.6. where . For any , the global minimum solution: and . In this experiment, we considered and used as the initial point. The algorithm successfully obtained the global minimizer. The time to reach it was 172.8436 seconds, and the numbers of evaluations of the filled function and of the original objective function were 12674 and 16774, respectively.

#### 6. Conclusions

In this paper, we first give a definition of a filled function for a non-smooth unconstrained minimization problem and construct a new filled function with two parameters. Then, we design a solution algorithm based on this filled function. Finally, we report numerical tests. The computational results suggest that this filled function approach is efficient. Of course, the efficiency of the proposed approach relies on the underlying non-smooth local optimization procedure. The numerical results also show that the algorithm can move successively from one local minimum to a better one, but in most cases more time is spent verifying that the current point is a global minimizer than finding one. In general, however, global optimality conditions for continuous variables remain an open problem; a criterion for global minimality would provide a solid stopping condition for a continuous filled function method.

#### Acknowledgments

The authors thank the anonymous referees for many useful suggestions, which improved this paper. This work was partially supported by the NNSF of China (Grants no. 10771162 and 10971053) and the NNSF of Henan Province (Grants no. 084300510060 and 094300510050).

1. R. P. Ge, “A filled function method for finding a global minimizer of a function of several variables,” Mathematical Programming, vol. 46, no. 2, pp. 191–204, 1990.
2. P. M. Pardalos, H. E. Romeijn, and H. Tuy, “Recent developments and trends in global optimization,” Journal of Computational and Applied Mathematics, vol. 124, no. 1-2, pp. 209–228, 2000.
3. P. M. Pardalos and H. E. Romeijn, Eds., Handbook of Global Optimization. Vol. 2: Heuristic Approaches, vol. 62 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
4. R. Horst and P. M. Pardalos, Eds., Handbook of Global Optimization, vol. 2 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995.
5. Z. Xu, H.-X. Huang, P. M. Pardalos, and C.-X. Xu, “Filled functions for unconstrained global optimization,” Journal of Global Optimization, vol. 20, no. 1, pp. 49–65, 2001.
6. Y. Yang and Y. Shang, “A new filled function method for unconstrained global optimization,” Applied Mathematics and Computation, vol. 173, no. 1, pp. 501–512, 2006.
7. X. Wang and G. Zhou, “A new filled function for unconstrained global optimization,” Applied Mathematics and Computation, vol. 174, no. 1, pp. 419–429, 2006.
8. L.-S. Zhang, C.-K. Ng, D. Li, and W.-W. Tian, “A new filled function method for global optimization,” Journal of Global Optimization, vol. 28, no. 1, pp. 17–43, 2004.
9. Q. Wu, S. Y. Liu, L. Y. Zhang, and C. C. Liu, “A modified filled function method for global minimization of a nonsmooth programming problem,” Mathematica Applicata, vol. 17, no. 2, pp. 36–40, 2004.
10. L. S. Zhang, “On the solving global optimization approach from local to global,” Journal of Chongqing, vol. 26, no. 1, pp. 1–6, 2009.
11. F. H. Clarke, Optimization and Nonsmooth Analysis, SIAM, Philadelphia, Pa, USA, 1990.
12. C. J. Price, B. L. Robertson, and M. Reale, “A hybrid Hooke and Jeeves—direct method for non-smooth optimization,” Advanced Modeling and Optimization, vol. 11, no. 1, pp. 43–61, 2009.
13. C. Audet and J. E. Dennis Jr., “Mesh adaptive direct search algorithms for constrained optimization,” SIAM Journal on Optimization, vol. 17, no. 1, pp. 188–217, 2006.