Abstract

A global optimization algorithm for solving the generalized geometric programming (GGP) problem is developed based on a new linearization technique. Furthermore, in order to improve the convergence speed of this algorithm, a new pruning technique is proposed, which can be used to cut away a large part of the currently investigated region in which the global optimal solution does not exist. Convergence of the algorithm is proved, and numerical experiments are reported to show the feasibility of the proposed algorithm.

1. Introduction

This paper considers the generalized geometric programming (GGP) problem in the following form (the standard form used in [7, 8]):

$$\mathrm{(GGP)}:\quad \min\ g_0(x)\quad \text{s.t.}\quad g_j(x)\le 0,\ j=1,\dots,M,\qquad x\in X^0=\{x\in\mathbb{R}^n:\ 0<\underline{x}_i\le x_i\le\overline{x}_i,\ i=1,\dots,n\},$$

where each $g_j(x)=\sum_{t=1}^{T_j}c_{jt}\prod_{i=1}^{n}x_i^{\gamma_{jti}}$, and the coefficients $c_{jt}$ and exponents $\gamma_{jti}$ are arbitrary real numbers.
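For concreteness, a small illustrative instance of this form (constructed here only to fix the notation, not taken from the test set of Section 4) is

$$\min_{x\in[1,3]^2}\ x_1^{0.5}x_2^{-1}-3x_1 \quad\text{s.t.}\quad x_1x_2^{2}-4\le 0,$$

in which both the objective and the constraint are signomials with real exponents, so the problem is nonconvex in general.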

Generally speaking, the GGP problem is a nonconvex programming problem with a wide variety of applications, for example, in engineering design, economics and statistics, manufacturing, and distribution contexts in risk management problems [1–4].

During the past years, many local optimization approaches for solving the GGP problem have been presented [5, 6], but global optimization algorithms based on the characteristics of the GGP problem are scarce. Maranas and Floudas [7] proposed a global optimization algorithm for the GGP problem based on convex relaxation. Shen and Zhang [8] presented a method to globally solve the GGP problem by using linear relaxation. Recently, several branch and bound algorithms have been developed [9, 10].

The purpose of this paper is to introduce a new global optimization algorithm for solving the GGP problem. In this algorithm, by utilizing the special structure of the GGP problem, a new linear relaxation technique is presented. Based on this technique, the initial GGP problem is systematically converted into a series of linear programming problems. The solutions of these converted problems can approximate the global optimal solution of the GGP problem through a successive refinement process.

The main features of this algorithm are: (1) a new linearization technique for solving the GGP problem is proposed, which exploits more information about the functions of the GGP problem; (2) the generated relaxation linear programming problems are embedded within a branch and bound algorithm without increasing the number of variables and constraints; (3) a new pruning technique is presented, which improves the convergence speed of the proposed algorithm; and (4) numerical experiments are reported, showing that the proposed algorithm can solve all of the test problems, finding global optimal solutions within a prespecified tolerance.

The structure of this paper is as follows. In Section 2, we first construct lower-bounding linear functions for the objective and constraint functions of the GGP problem; we then derive the relaxation linear programming (RLP) problem; finally, to improve the convergence speed of our algorithm, we present a new pruning technique. In Section 3, the proposed branch and bound algorithm is described, and its convergence is established. Some numerical results are reported in Section 4.

2. Linear Relaxation and Pruning Technique

The principal structure in the development of a solution procedure for solving GGP problem is the construction of lower bounds for this problem, as well as for its partitioned subproblems. A lower bound of GGP problem and its partitioned subproblems can be obtained by solving a linear relaxation problem. The proposed strategy for generating this linear relaxation problem is to underestimate every nonlinear function with a linear function. In what follows, all the details of this procedure will be given.

Let $X=[\underline{x},\overline{x}]$ represent either the initial box $X^0$ or a modified box, as defined for some partitioned subproblem in the branch and bound scheme.

Consider a term $\prod_{i=1}^{n}x_i^{\gamma_{jti}}$ in $g_j(x)$. Let $y_i=\ln x_i$ for $i=1,\dots,n$; then we have

$$\prod_{i=1}^{n}x_i^{\gamma_{jti}}=\exp\Bigl(\sum_{i=1}^{n}\gamma_{jti}\,y_i\Bigr).\tag{2.1}$$

From (2.1), we can obtain a lower bound $Y^{l}$ and an upper bound $Y^{u}$ of the exponent sum $\sum_{i=1}^{n}\gamma_{jti}\,y_i$ as follows:

$$Y^{l}=\sum_{i=1}^{n}\min\bigl(\gamma_{jti}\ln\underline{x}_i,\ \gamma_{jti}\ln\overline{x}_i\bigr),\qquad Y^{u}=\sum_{i=1}^{n}\max\bigl(\gamma_{jti}\ln\underline{x}_i,\ \gamma_{jti}\ln\overline{x}_i\bigr).\tag{2.2}$$
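As a minimal numerical illustration of (2.1) and (2.2) (a sketch of ours; the function name term_bounds and the test data are hypothetical), the following code bounds a single product term over a box via the substitution $y_i=\ln x_i$:

```python
import numpy as np

def term_bounds(gamma, xl, xu):
    """Bounds of prod_i x_i**gamma[i] over the box [xl, xu] (xl > 0), via
    y_i = ln(x_i): the exponent sum s = sum_i gamma[i]*y_i is linear in y,
    so its extremes are attained coordinatewise at the endpoints, and
    exp is increasing, which turns them into bounds on the term itself."""
    gamma = np.asarray(gamma, float)
    yl, yu = np.log(np.asarray(xl, float)), np.log(np.asarray(xu, float))
    s_lo = np.minimum(gamma * yl, gamma * yu).sum()
    s_hi = np.maximum(gamma * yl, gamma * yu).sum()
    return np.exp(s_lo), np.exp(s_hi)

# Example: bounds of x1**0.5 * x2**(-1.2) over [1, 2] x [1, 3]
print(term_bounds([0.5, -1.2], [1.0, 1.0], [2.0, 3.0]))  # ~(0.268, 1.414)
```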

To derive the linear relaxation problem, we will use a convex separation technique and a two-part relaxation technique.

2.1. First-Part Relaxation

Let ; then can be expressed in the following form: For , we can derive its gradient and Hessian matrix: Therefore, by (2.4), the following relation holds: Let ; then, for all , we have Thus, the function is convex on . Consequently, the function can be decomposed into the difference of two convex functions; that is, it is a d.c. function, which admits the following d.c. decomposition: where Let . Since is a convex function, we have In addition, for , it is not difficult to show Furthermore, we can obtain Since , it follows that Thus, from (2.7), (2.9), and (2.12), we have Hence, by (2.1), (2.3), and (2.13), the first-part relaxation of can be obtained as follows:
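The mechanics of this d.c.-based relaxation can be sketched generically (a minimal illustration, not the paper's exact formulas: for a twice-differentiable $f$, choose $\lambda$ large enough that $h(y)=f(y)+(\lambda/2)\|y\|^2$ is convex on the box; bound $h$ from below by its supporting hyperplane at the midpoint and the concave part $-(\lambda/2)\|y\|^2$ by its coordinatewise chords; the function names and test data below are hypothetical):

```python
import numpy as np

def dc_linear_underestimator(f, grad_f, lam, l, u):
    """Linear underestimator of f on the box [l, u] via the d.c. split
    f(y) = h(y) - (lam/2)*||y||^2 with h(y) = f(y) + (lam/2)*||y||^2.
    If lam makes h convex on the box, then h lies above its tangent
    plane at the midpoint, and each concave piece -(lam/2)*y_i**2 lies
    above its chord -(lam/2)*((l_i + u_i)*y_i - l_i*u_i); the sum of
    the two linear lower bounds is returned as (c, d), L(y) = c.y + d."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    y0 = (l + u) / 2.0                        # linearization point
    gh = grad_f(y0) + lam * y0                # gradient of h at y0
    c = gh - (lam / 2.0) * (l + u)
    d = f(y0) + (lam / 2.0) * (y0 @ y0) - gh @ y0 + (lam / 2.0) * (l * u).sum()
    return c, d

# Example: f(y) = exp(y1 + y2) - 3*y1*y2 on [0, 1]^2.  The smallest
# Hessian eigenvalue of f on this box is -1, so lam = 2 convexifies h.
f = lambda y: np.exp(y[0] + y[1]) - 3.0 * y[0] * y[1]
g = lambda y: np.array([np.exp(y[0] + y[1]) - 3.0 * y[1],
                        np.exp(y[0] + y[1]) - 3.0 * y[0]])
c, d = dc_linear_underestimator(f, g, 2.0, [0.0, 0.0], [1.0, 1.0])
assert all(f(y) >= c @ y + d - 1e-9 for y in np.random.rand(1000, 2))
```

The sum of the two linear lower bounds is a linear underestimator of $f$ over the box, which is exactly the role the first-part relaxation plays for each constituent function.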

2.2. Second-Part Relaxation

Consider the function $\exp(Y)$ on the interval $[Y^{l},Y^{u}]$. As is well known, its linear lower and upper bounding functions can be derived as follows:

$$K\,(1+Y-\ln K)\ \le\ \exp(Y)\ \le\ \exp\bigl(Y^{l}\bigr)+K\,\bigl(Y-Y^{l}\bigr),\tag{2.15}$$

where $K=\bigl(\exp(Y^{u})-\exp(Y^{l})\bigr)/\bigl(Y^{u}-Y^{l}\bigr)$; the lower bound is the tangent of $\exp$ parallel to the chord, and the upper bound is the chord itself, since $\exp$ is convex.
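A quick numerical check of this pair of bounds (a sketch of ours; the interval and the tolerances are arbitrary):

```python
import numpy as np

def exp_bounds(l, u):
    """(slope, intercept) pairs of the linear lower/upper bounds of
    exp(y) on [l, u]: the tangent parallel to the chord underestimates
    the convex function exp, while the chord itself overestimates it."""
    K = (np.exp(u) - np.exp(l)) / (u - l)    # common slope
    lower = (K, K * (1.0 - np.log(K)))       # tangent at y = ln K
    upper = (K, np.exp(l) - K * l)           # chord through the endpoints
    return lower, upper

(lo_k, lo_b), (up_k, up_b) = exp_bounds(-1.0, 2.0)
ys = np.linspace(-1.0, 2.0, 1001)
assert np.all(lo_k * ys + lo_b <= np.exp(ys) + 1e-12)
assert np.all(np.exp(ys) <= up_k * ys + up_b + 1e-12)
```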

Now let Then, from (2.14) and (2.15), we can obtain the linear lower bound function of $g_j(x)$ on $X$, denoted by $g_j^{L}(x)$, as follows: where Obviously, $g_j^{L}(x)\le g_j(x)$ for all $x\in X$.

Consequently, the corresponding approximating relaxation linear programming (RLP) problem of the GGP problem on $X$ can be obtained by replacing each $g_j$ with its linear lower bound $g_j^{L}$:

Theorem 2.1. Let $X=[\underline{x},\overline{x}]\subseteq X^0$. Then, for all $x\in X$, the difference between $g_j(x)$ and its linear lower bound $g_j^{L}(x)$ satisfies $g_j(x)-g_j^{L}(x)\to 0$ as $\|\overline{x}-\underline{x}\|\to 0$.

Proof. For all , let and let . Then, it suffices to prove To this end, we first consider the difference . By (2.7), (2.13), and (2.14), it follows that where is a constant vector and satisfies .
By the definition of , we have as . Thus, we have as .
Second, we consider the difference . From the definitions of and , it follows that
By [8], we know that and , as . Thus, we have as .
Combining the above, it follows that , and this completes the proof.

From Theorem 2.1, it follows that $g_j^{L}$ approximates the function $g_j$ increasingly well as $\|\overline{x}-\underline{x}\|\to 0$.
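This limit can be illustrated numerically for the exponential bounds (2.15): both bounding lines share the slope $K$, so their gap is constant on the interval and equals the difference of the intercepts, and it shrinks to zero with the interval width. A short check (ours; the centre point and widths are arbitrary):

```python
import numpy as np

# Both bounding lines of exp on [l, u] share the slope K, so their gap
# is constant and equals the difference of the intercepts; it vanishes
# as the interval width 2*h shrinks to zero.
c = 1.0
for h in [1.0, 0.1, 0.01]:
    l, u = c - h, c + h
    K = (np.exp(u) - np.exp(l)) / (u - l)
    gap = (np.exp(l) - K * l) - K * (1.0 - np.log(K))
    print(f"width {2*h:.2f}: max gap {gap:.6f}")
```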

Based on the above discussion, every feasible point of the GGP problem is feasible to the RLP problem, and the objective function of the RLP problem underestimates that of the GGP problem at every feasible point; hence, the optimal value of the RLP problem provides a valid lower bound for the optimal value of the GGP problem. Thus, denoting the optimal value of a problem (P) by $v(\mathrm{P})$, we have $v(\mathrm{RLP})\le v(\mathrm{GGP})$.

2.3. Pruning Technique

In order to improve the convergence speed of this algorithm, we present a new pruning technique, which can be used to eliminate regions in which no globally optimal solution of the GGP problem exists.

Assume that UB is the currently known upper bound of the optimal value of the GGP problem. Let

Theorem 2.2. Let be any subrectangle of with , and let If there exists some index such that and , then there is no globally optimal solution of the GGP problem on ; if and , then there is no globally optimal solution of the GGP problem on , where

Proof. First, we show that for all , . When , consider the th component of . Since , it follows that Noting that , we have . From the definition of and the above inequality, we obtain This implies that, for all , . In other words, for all , is always greater than the optimal value of the GGP problem. Therefore, there is no globally optimal solution of the GGP problem on .
For all , if and with some , then, by arguments similar to those above, we can conclude that there is no globally optimal solution of the GGP problem on .
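The effect of such a test can be sketched generically (our simplified illustration, using a linear underestimator $L(x)=c^{\top}x+d$ of the objective and an incumbent value UB; the exact thresholds of Theorem 2.2 are defined through the relaxation data and may differ in detail):

```python
import numpy as np

def prune_box(c, d, UB, l, u):
    """Shrink the box [l, u] given a linear lower bound L(x) = c.x + d of
    the objective and a known upper bound UB: any slice of the box on
    which L exceeds UB cannot contain a global optimum and is cut off."""
    l, u = l.astype(float).copy(), u.astype(float).copy()
    for k in range(len(c)):
        # least value of L over the box with x_k left free
        m = d + sum(min(c[i] * l[i], c[i] * u[i])
                    for i in range(len(c)) if i != k)
        if c[k] > 0:       # L > UB whenever x_k > (UB - m)/c_k
            u[k] = min(u[k], (UB - m) / c[k])
        elif c[k] < 0:     # L > UB whenever x_k < (UB - m)/c_k
            l[k] = max(l[k], (UB - m) / c[k])
    return l, u            # l[k] > u[k] for some k: the box is fathomed

# Example: L(x) = 2*x1 - x2 + 1 on [0,4] x [0,4] with UB = 3
print(prune_box(np.array([2.0, -1.0]), 1.0, 3.0,
                np.array([0.0, 0.0]), np.array([4.0, 4.0])))
# -> the upper bound of x1 shrinks from 4 to 3; x2 is unaffected
```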

3. Algorithm and Its Convergence

In this section, based on the former relaxation linear programming (RLP) problem, a branch and bound algorithm is presented to globally solve GGP problem. In order to ensure convergence to the global optimal solution, this algorithm needs to solve a sequence of (RLP) problems over partitioned subsets of .

In this algorithm, the set will be partitioned into subrectangles. Each subrectangle corresponds to a node of the branch and bound tree and is associated with a relaxation linear subproblem.

At stage of the algorithm, suppose that we have a collection of active nodes denoted by . For each node , we will have computed a lower bound for the optimal value of the GGP problem via the solution of the RLP problem, so that the lower bound on the optimal value of the GGP problem over the whole initial box region at stage is given by . Whenever the solution of the RLP problem for any node subproblem turns out to be feasible to the GGP problem, we update the upper bound UB if necessary. Then, for each stage , the active node collection will satisfy , for all . We now select an active node and partition its associated rectangle into two subrectangles according to the following branching rule. For these two subrectangles, the fathoming step is applied to identify whether they should be eliminated. In the end, we obtain a collection of active nodes for the next stage. This process is repeated until convergence is achieved.

3.1. Branching Rule

As is well known, the critical element in guaranteeing convergence to the global optimal solution is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule, which is sufficient to ensure convergence since it drives the widths of all intervals to zero. Consider any node subproblem identified by the rectangle

The branching rule is as follows:
(a) let ;
(b) let satisfy ;
(c) let
Through this branching rule, the rectangle is partitioned into two subrectangles and .
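A minimal sketch of this rule (ours; it assumes that step (b) selects the longest edge of the rectangle, the standard choice that drives all interval widths to zero):

```python
import numpy as np

def bisect_longest_edge(l, u):
    """Bisection rule: split the rectangle [l, u] at the midpoint of its
    longest edge, producing two subrectangles that cover it exactly."""
    k = int(np.argmax(u - l))        # index of the longest edge
    mid = 0.5 * (l[k] + u[k])
    u1, l2 = u.copy(), l.copy()
    u1[k], l2[k] = mid, mid
    return (l, u1), (l2, u)

box1, box2 = bisect_longest_edge(np.array([0.0, 0.0]), np.array([4.0, 1.0]))
print(box1, box2)   # splits along x1: [0,2]x[0,1] and [2,4]x[0,1]
```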

3.2. Algorithm Statement

Based on the preceding results, the basic steps of the proposed global optimization algorithm are summarized as follows. Let denote the optimal value of the RLP problem on the rectangle .

Step 1. Choose the tolerance $\epsilon>0$. Find an optimal solution and the optimal value for the RLP problem with . Set . If is feasible to the GGP problem, then update the upper bound . If , then stop: is a global $\epsilon$-optimal solution of the GGP problem. Otherwise, set .

Step 2. Set . Subdivide into two rectangles via the branching rule. Let .

Step 3. For each new subrectangle , apply the pruning technique of Theorem 2.2 to prune the box , and update the corresponding parameters . Compute and find an optimal solution for the RLP problem with , where . If possible, update the upper bound , and let denote the point satisfying .

Step 4. If , then set .

Step 5. .

Step 6. Set .

Step 7. Set , and let satisfy . If , then stop: is a global $\epsilon$-optimal solution of the GGP problem. Otherwise, set , and go to Step 2.
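The steps above can be condensed into the following skeleton (a sketch, not the authors' implementation: solve_rlp, is_feasible, and f are hypothetical callbacks standing for the RLP solver, the GGP feasibility test, and the GGP objective, and the branching rule of Section 3.1 is repeated for self-containment):

```python
import heapq
import numpy as np

def bisect_longest_edge(l, u):      # branching rule of Section 3.1
    k = int(np.argmax(u - l))
    mid = 0.5 * (l[k] + u[k])
    u1, l2 = u.copy(), l.copy()
    u1[k], l2[k] = mid, mid
    return (l, u1), (l2, u)

def branch_and_bound(solve_rlp, is_feasible, f, l0, u0, eps=1e-6):
    """solve_rlp(l, u) must return (lower_bound, minimizer) of the RLP
    relaxation on the box [l, u]; nodes are kept in a heap keyed by
    their lower bounds, so the popped bound is the global LB."""
    lb0, x0 = solve_rlp(l0, u0)
    UB, best = (f(x0), x0) if is_feasible(x0) else (np.inf, None)
    heap, counter = [(lb0, 0, l0, u0)], 1   # counter breaks heap ties
    while heap:
        lb, _, l, u = heapq.heappop(heap)   # node with the least bound
        if UB - lb <= eps:                  # LB is eps-close to UB: stop
            break
        for lk, uk in bisect_longest_edge(l, u):
            # (the pruning rule of Theorem 2.2 would shrink [lk, uk] here)
            lbk, xk = solve_rlp(lk, uk)
            if is_feasible(xk) and f(xk) < UB:
                UB, best = f(xk), xk        # update the incumbent
            if lbk < UB - eps:              # otherwise the node is fathomed
                heapq.heappush(heap, (lbk, counter, lk, uk))
                counter += 1
    return best, UB
```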

3.3. Convergence of the Algorithm

The convergence properties of the algorithm are given in the following theorem.

Theorem 3.1. (a) If the algorithm is finite, then, upon termination, is a global $\epsilon$-optimal solution of the GGP problem.
(b) If the algorithm is infinite, then, along any infinite branch of the branch and bound tree, an infinite sequence of iterations will be generated, and any accumulation point of the sequence will be a global solution of GGP problem.

Proof. (a) If the algorithm is finite, then it terminates at some stage . Upon termination, it follows from the algorithm that . From Steps 1 and 3, this implies that . Let denote the optimal value of the GGP problem; then, by Section 2, we know that . Since is a feasible solution of the GGP problem, .
Combining the above inequalities, we obtain . Therefore, , and the proof of part (a) is complete.
(b) Let denote the feasible region of the GGP problem. When the algorithm is infinite, from its construction we know that is a nondecreasing sequence bounded above by , which guarantees the existence of the limit . Since is contained in a compact set , there exists a convergent subsequence ; suppose . Then, by the proposed algorithm, there exists a decreasing subsequence , where , and . According to Theorem 2.1, we have
The only remaining task is to prove that . Since is a closed set, it follows that . Assume that ; then there exists some such that . Since is continuous, by Theorem 2.1, the sequence converges to . By the definition of convergence, there exists such that, for any , . Therefore, for any , we have , which implies that is infeasible. This contradicts the assumption that . Hence, ; that is, is a global solution of the GGP problem, and the proof of part (b) is complete.

4. Numerical Experiment

In this section, we report some numerical results to verify the performance of the proposed algorithm. The test problems were run on a Pentium IV (1.66 GHz) microcomputer. The algorithm is coded in Matlab 7.1 and uses the simplex method to solve the relaxation linear programming problems. In our experiments, the convergence tolerance is set to .

Example 4.1 (see [8]). We have the following:
By using the method in this paper, the optimal solution is , the optimal value is -147.6667, and the number of iterations is 557. In contrast, using the method in [8], the optimal solution is , the optimal value is -83.249728406, and the number of iterations is 1829.

Example 4.2 (see [10]). We have the following:
Using the method in this paper, the optimal solution (0.5, 0.5) with optimal value 0.5 is found after 16 iterations; using the method in [10], the same optimal solution is found after 96 iterations.

Example 4.3. We have the following:
By utilizing the method in this paper, the optimal value -1.3501 is found after 14 iterations at an optimal solution (0.5, 1.5).

Example 4.4. We have the following:
Utilizing the method in this paper, we find the optimal value 1.1935 after 15 iterations at an optimal solution .
In future work, we will conduct more numerical experiments to test the performance of the proposed algorithm.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grant no. 60974082) and the Fundamental Research Funds for the Central Universities (K50510700004).