## Nonlinear Analysis: Algorithm, Convergence, and Applications 2014

Xue-Ping Hou, Pei-Ping Shen, Yong-Qiang Chen, "A Global Optimization Algorithm for Signomial Geometric Programming Problem", *Abstract and Applied Analysis*, vol. 2014, Article ID 163263, 12 pages, 2014. https://doi.org/10.1155/2014/163263

# A Global Optimization Algorithm for Signomial Geometric Programming Problem

**Academic Editor:** Yisheng Song

#### Abstract

This paper presents a global optimization algorithm for solving the signomial geometric programming (SGP) problem. In the algorithm, by straightforward algebraic manipulation of terms and a transformation of variables, the initial nonconvex programming problem (SGP) is first converted into an equivalent monotonic optimization problem and then reduced to a sequence of linear programming problems based on a linearizing technique. To improve the computational efficiency of the algorithm, two range reduction operations are combined in the branch and bound procedure. The proposed algorithm converges to the global minimum of (SGP) through the successive solution of a series of relaxation linear programming problems. Finally, numerical results are reported to demonstrate the feasibility and effectiveness of the proposed method.

#### 1. Introduction

The signomial geometric programming (SGP) problem can be formulated as the following nonlinear optimization problem: where are positive integers and and are arbitrary real constant coefficients and exponents, respectively. In general, problem (SGP) is a nonlinear optimization problem with a nonconvex objective function and constraint set. As noted in [1, 2], many nonlinear programming problems may be restated as geometric programs with little additional effort by simple techniques such as a change of variables or by straightforward algebraic manipulation of terms. Additionally, the (SGP) problem has found a wide range of applications in production planning, location, distribution contexts in risk management problems, various chemical process design and engineering design situations, and so on [3–10]. Hence, it is necessary to develop good algorithms for solving (SGP).
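The displayed formulation did not survive extraction in this copy. For orientation only, the standard form of a signomial geometric program as it appears in the literature, written in notation of our own choosing (not necessarily the paper's), is:

```latex
\begin{aligned}
\min_{x}\;\; & g_{0}(x) \\
\text{s.t.}\;\; & g_{j}(x) \le 1, \quad j = 1,\dots,M, \\
& x \in X^{0} = \bigl\{ x \in \mathbb{R}^{n} :
    0 < \underline{x}_{i} \le x_{i} \le \overline{x}_{i},\ i = 1,\dots,n \bigr\},
\end{aligned}
\qquad
g_{j}(x) \;=\; \sum_{t=1}^{T_{j}} \delta_{jt}\, c_{jt}
  \prod_{i=1}^{n} x_{i}^{\gamma_{jti}},
```

where each sign $\delta_{jt} \in \{-1,+1\}$, each coefficient $c_{jt} > 0$, and the exponents $\gamma_{jti}$ are arbitrary real numbers. When all signs are positive, each $g_j$ is a posynomial; allowing negative signs is what makes the problem signomial and nonconvex.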

The theory of (SGP) was initially developed over three decades ago by Duffin et al. [11–13]. Subsequently, it has been studied by a number of researchers. In general, local optimization approaches for solving the (SGP) problem fall into three classes. First, successive approximation by posynomials has received the most attention [14]. Second, Passy and Wilde [15] developed a weaker type of duality to accommodate this class of nonlinear optimization problems. Third, general nonlinear programming methods have been applied [16]. Though local optimization methods for the (SGP) problem are ubiquitous, global optimization algorithms based on the characteristics of the (SGP) problem are scarce. When in is a positive integer or rational number, the authors of [8, 17–19] developed corresponding global solution methods for (SGP). In the case where each is real, Maranas et al. [20] proposed a global optimization branch and bound algorithm using the exponential variable transformation of (SGP) and a convex relaxation. Shen and Zhang [21] also proposed a global optimization algorithm based on the exponential variable transformation of (SGP) and a linear relaxation. Recently, Shen et al. [22] presented a robust algorithm for the (SGP) problem by seeking an essential optimal solution. Wang et al. [23] developed a general algorithm for solving the (SGP) problem with nonpositive degree of difficulty. Qu et al. [24] proposed a global optimization algorithm using linear relaxation for the (SGP) problem.

In this paper, we present a new global optimization algorithm for the (SGP) problem that uses several reduction operations and solves a sequence of linear programming problems over partitioned subsets. The proposed method uses a convenient transformation based on the characteristics of the (SGP) problem; thus, the original problem (SGP) is equivalently reformulated as a monotonic optimization problem (), that is, one in which the objective function is increasing and all the constraint functions can be expressed as the difference of two increasing functions. A comparison of this method with the methods reviewed above follows. First, the proposed linear relaxation is based on the monotonic optimization problem (), which exploits more information about the functions of (SGP). More importantly, the proposed reduction operations adopted in our global optimization algorithm can cut away a large part of the region in which no global optimal solution of (SGP) exists. This solution procedure is more efficient than the methods in [21, 25, 26]. Second, the problem investigated in this paper generalizes those of [8, 17–19]. Furthermore, our method is computationally more convenient than the convex relaxation [19] because the main work is to solve linear programs and to find the zeros of strictly monotonic functions of one variable over the interval [0,1), both of which can be done very efficiently by existing methods, for example, the simplex method and the bisection search method. Third, numerical results and comparisons with other methods are reported to show the potential advantages of the proposed algorithm.

The remainder of this paper is organized as follows. The next section converts the (SGP) problem into a monotonic optimization problem. We discuss the rectangular branching operation, the lower bounding operation, and the reducing operations needed in our algorithm in Section 3. Section 4 incorporates this approach into an algorithm for solving (SGP) and shows the convergence property of the algorithm. In Section 5, we report the results of solving some numerical examples with the algorithm. A summary is presented in the last section.

#### 2. Equivalent Problem

In order to convert (SGP) problem into an equivalent optimization problem (), for each , , let us denote By multiplying both sides of each constraint inequality of (SGP) with and by applying the exponent transformation to the formulation (SGP), we can obtain the following equivalent problem:
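The exponent transformation referred to above is the classical one from geometric programming: substituting the logarithm for each (positive) variable turns every monomial into the exponential of a linear form. In illustrative notation of our own (the paper's symbols are elided in this copy):

```latex
x_{i} = e^{y_{i}} \;\; (x_{i} > 0)
\quad\Longrightarrow\quad
c \prod_{i=1}^{n} x_{i}^{\gamma_{i}}
\;=\; c\, \exp\!\Big( \sum_{i=1}^{n} \gamma_{i}\, y_{i} \Big),
```

so each signomial term becomes an exponential of a linear function of $y$, which is the structure exploited by the subsequent linear relaxation.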

Next, for convenience, for each , we assume, without loss of generality, that for and for , and some notation is introduced as follows: Thus, by using , , let us calculate Then, by introducing some additional variables , , with , we can convert the problem (SGP1) intowhere Additionally, for the sake of simplicity, let ; the problem () can be rewritten as the following form: where

Note that each function of problem () is increasing (i.e., a function is said to be increasing if for all satisfying , ). Thus problem () is a monotonic optimization problem, and the key equivalent result for problems (SGP) and () is given by Theorem 1.

Theorem 1. * is a global optimal solution for problem (SGP) if and only if is a global optimal solution for problem (P), where .*

*Proof. *The proof of this theorem follows easily from the definitions of problems (SGP) and (); therefore, it is omitted here.

From Theorem 1, notice that, in order to solve problem (SGP), we may solve problem () instead. In addition, it is easy to see that the global optimal values of problems (SGP) and () are equal. Based on the above discussion, here, from now on we assume that the original problem (SGP) has been converted into the problem (); then a general approach will be considered for solving problem ().

#### 3. Key Algorithm Processes

To globally solve the problem (), a branch-reduce-bound (BRB) algorithm will be proposed. This algorithm proceeds according to the standard branch and bound scheme with three key processes: branching, reducing, and bounding.

The branching process consists of a successive rectangular partition of the initial box following an exhaustive subdivision rule, that is, one such that any infinite nested sequence of partition sets generated by the algorithm shrinks to a singleton. A commonly used exhaustive subdivision rule is standard bisection.
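The standard bisection rule splits the selected rectangle perpendicular to its longest edge at the midpoint. A minimal sketch (the function and variable names are ours, not the paper's):

```python
def bisect(box):
    """Split a box (a list of (low, high) intervals) along its longest edge.

    Standard bisection: halving the longest edge guarantees that any
    infinite nested sequence of subboxes shrinks to a single point,
    which is the exhaustiveness property the algorithm requires.
    """
    # index of the longest edge
    j = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[j]
    mid = (lo + hi) / 2.0
    left, right = list(box), list(box)
    left[j] = (lo, mid)
    right[j] = (mid, hi)
    return left, right
```

For example, splitting the box [0, 2] x [0, 1] yields [0, 1] x [0, 1] and [1, 2] x [0, 1].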

The reducing process consists of applying reduction operations to shrink the current partition set . This process aims at tightening the box containing the feasible portion currently still of interest.

The bounding process consists of using the linearization method to obtain a better lower bound.

Next, we describe each of these processes in detail.

##### 3.1. Lower Bound

At a given stage of the BRB algorithm for (), let be a rectangle during the partitioning procedure and still of interest; we intend to compute a lower bound of the optimal value of () over . Restrict the problem () to : Denote the optimal objective function value of problem by .

Since is increasing, an obvious bound is ; although very simple, this bound suffices to ensure convergence of the algorithm. However, the following procedure may give a better bound.

Our main method for computing a lower bound of over is to solve the relaxation linear programming of . The linear relaxation of the problem can be realized by underestimating every function and and by overestimating every function , for each . All the details for generating the linear relaxation will be given in the following.

Denote where . In addition, let where , .

Theorem 2. *Consider the functions , , and , for any , where and . Then the following two statements are valid. (i) The function is the concave envelope of the function over , and the function is a supporting hyperplane of , parallel to . Moreover, the functions , , and satisfy . (ii) The differences and satisfy , where .*

*Proof. *The proof is similar to Theorem 1 in [21]; therefore, it is omitted here.

*Remark 3. *From Theorem 2, it follows that the functions and approximate the function arbitrarily closely as , respectively.
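The exact formulas of Theorem 2 are elided in this copy, but the construction in [21] bounds the convex function exp on an interval between its chord, which is its concave envelope there, and the parallel tangent line beneath it. A sketch of that construction, with names of our own choosing:

```python
import math

def exp_bounds(l, u):
    """Return (under, over): parallel linear under-/over-estimators of exp on [l, u].

    The chord through (l, e^l) and (u, e^u) is the concave envelope of
    exp on [l, u] (exp is convex, so the chord lies above it there).
    The tangent line with the same slope K touches exp at y* = ln K
    and underestimates exp everywhere.
    """
    K = (math.exp(u) - math.exp(l)) / (u - l)   # common slope of both lines
    over = lambda y: math.exp(l) + K * (y - l)  # chord: exp(y) <= over(y) on [l, u]
    ystar = math.log(K)                         # tangency point, exp(ystar) = K
    under = lambda y: K * (y - ystar) + K       # tangent: under(y) <= exp(y)
    return under, over
```

Both lines share the slope K, so the gap between them is constant on the interval and shrinks to zero as the interval does, which is the approximation property stated in Remark 3.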

From Theorem 2, it is obvious that for all we have where .

Consequently, we obtain the following linear programming as a linear relaxation of over the partition set :

An important property of is that its optimal value satisfies and thus, from (21), the optimal value of provides a valid lower bound for the optimal value of over .

Based on the above discussion, for any rectangle , in order to obtain a lower bound of the optimal value of the problem , we may compute such that Clearly, defined in (22) satisfies and is consistent. It can provide a valid lower bound and guarantee convergence.

##### 3.2. Reduction Operations

Clearly, the smaller the rectangle is, the tighter the lower bound of will be, and therefore the closer the feasible solution of will be to the optimal solution of . To show this, the next results give two reduction operations (i.e., reduction rules A and B) to reduce the size of this partitioned rectangle without losing any feasible solution currently still of interest.

###### 3.2.1. Reduction Rule A

Rule A is based on the monotonic structure of the problem . At a given stage of the BRB algorithm for , for a rectangle generated during the partitioning procedure and still of interest, let be the objective function value of the best feasible solution to problem found so far. Given an , we want to find a feasible solution of such that , or else establish that no such exists. The search for such can then be restricted to the set , where

The reduction rule aims at replacing the rectangle with a smaller rectangle without losing any point , that is, such that . The rectangle satisfying this condition is denoted by with To illustrate how is deduced by this rule, we first define the following functions.

*Definition 4. *Given two boxes and with , for , , the functions , , and are defined by
where denotes the th unit vector of , that is, a vector such that , , and the functions , , and are given in problem (), respectively.

Clearly, the functions , , and are either constant or strictly monotonic over the interval [0,1) from the properties of , , and . By using these functions, can be given as follows.

Theorem 5. *(i) If or for some , then . (ii) If and for each , then , where are given by .*

*Proof. *(i) By the increasing property of , , and , if , then for every . If there exists such that , then for every . In both cases, .

(ii) Given any point satisfying
we will show that . Let

Firstly, we will show that . If , then there exists index such that
We consider the following two cases. *Case 1*. If , then from (31) we have , conflicting with ; that is, . *Case 2*. If , the function must be strictly decreasing in the single variable over the interval [0,1). Indeed, if the function were not strictly decreasing in , it would have to be constant over the interval [0,1). In this case, we have
It follows from the definition of that , contradicting .

Since the function is strictly decreasing, it follows from (31) and the definition of that
hence,
In addition, since is an increasing function in -dimension variable and , we have
conflicting with .

Based on the above discussion, we have ; that is, in either case.

Second, we can also show from that
Suppose that ; then there exists some such that
that is, there exists such that
By the definition of , there are the following two cases to consider. *Case 1*. If , then from (38) we have , conflicting with ; that is, . *Case 2*. If , the function is strictly increasing in the single variable . Indeed, if the function were not strictly increasing in , it would have to be constant over the interval [0,1). In this case, we have
or
It follows from the definition of that , which contradicts .

Since the function is strictly increasing, from (31) and the definition of , it implies that
or

Assume that (41) holds; we can derive from (38) that
It follows from and increasing that
conflicting with .

If (42) holds, we obtain from (38) that
since and is increasing, we have
This contradicts .

From the above results, we must have in both cases, and this ends the proof.

*Remark 6. *Clearly, for any , and defined in Theorem 5 must exist and be unique, since the functions , , and are all continuous and increasing.
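Remark 6 reduces rule A to finding the unique zero of a continuous, strictly monotonic function on [0,1), which, as noted in the introduction, can be done by bisection search. A generic sketch in our own notation, assuming the function changes sign on the bracket:

```python
def monotone_zero(phi, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection search for the zero of a continuous monotone function phi.

    Assumes phi(lo) and phi(hi) have opposite signs (or phi(lo) is zero);
    each step halves the bracket, so the error shrinks geometrically.
    phi is never evaluated at hi, consistent with the half-open interval.
    """
    flo = phi(lo)
    if flo == 0.0:
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = phi(mid)
        if fmid == 0.0:
            return mid
        # keep the half-bracket on which phi still changes sign
        if (fmid > 0.0) == (flo > 0.0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same routine handles increasing and decreasing functions, since only the relative signs of the endpoint values are used.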

###### 3.2.2. Reduction Rule B

For any with , without loss of generality, we assume the above relaxation linear problem can be rewritten as

Let where , .

Theorem 7. *For any rectangle , if , then there exists no optimal solution of over ; otherwise, consider the following two cases: if there exists some satisfying and , then there is no optimal solution of over ; conversely, if and for some , then there exists no optimal solution of over , where .*

Theorem 8. *For any rectangle , if for some , then there exists no feasible solution of problem over ; otherwise, consider the following two cases: if there exists some index and satisfying and , then there is no feasible solution of the problem over ; conversely, if and for some and , then there exists no feasible solution of the problem over , where .*

*Proof. *The proofs of Theorems 7 and 8 are similar to those of Theorems 2 and 3 in [27], respectively; therefore, they are omitted here.

By Theorems 7 and 8, we can give a new reduction rule B to reject some regions in which the globally optimal solution of does not exist. The computation procedure of this rule is summarized as follows.

*Step 1. *Compute in (48). If , let ; otherwise, compute in (49). If and for some , then let and with . If and for some , then let and with .

*Step 2. *For any , compute in (48). If for some , then let ; otherwise, compute in (50) . If and for some and , then let and with . If and for some and , then let and with .

Rule B provides a possibility to cut away all or a large part of the rectangle which is currently investigated by the algorithm procedure.

#### 4. Algorithm and Its Convergence

In this section, a branch-reduce-bound (BRB) algorithm is developed to solve the problem () based on the former discussion. This method needs to solve a sequence of (RLP) problems over partitioned subsets of .

The BRB algorithm is based on partitioning the rectangle into subrectangles, each concerned with a node of the branch and bound tree. Hence, at any stage of the algorithm, suppose that we have a collection of active nodes denoted by , that is, each associated with a rectangle , . For each such node , we will compute a lower bound of the optimal objective function value of () via the optimal value of the RLP() and , so the lower bound of the optimal value of () at stage is given by . We now select an active node to subdivide its associated rectangle into two subrectangles according to the standard branch rule for each new node, reducing it, and then compute the lower bound as before. At the same time, if necessary, we will update the upper bound . Upon fathoming any nonimproving node, we obtain a collection of active nodes for the next stage, and this process is repeated until convergence is obtained.

*Algorithm 1. *Consider the following steps.*Step 0 (Initialization)*. Choose the convergence tolerance . Let and . If some feasible solutions are available, add them to and let ; otherwise, let and . Set .*Step 1 (Reduction)*. (i) Delete every box such that or for some , and denote the remaining boxes still as . If , apply reduction rule A described in Theorem 5 in Section 3.2 to each box . Let with .

(ii) If , for each box that is currently investigated, we use reduction rule B in Section 3.2.2 to cut away and denote the remaining part still as .*Step 2 (Bounding)*. If , do the following for each .

(i) Solve the problem to obtain the optimal solution and the optimal value . Let .

(ii) If for every , then set .

(iii) If , compute a point such that ; otherwise, let .

(iv) If , define the new upper bound , and denote the best known feasible point by . Set .*Step 3 (Convergence Checking)*. Set .

If , then stop: if , the problem is infeasible; otherwise, is the optimal value and is the optimal solution. Otherwise, select an active node for further consideration.*Step 4 (Branching)*. Divide into two new subrectangles using the standard bisection rule and let be the collection of these two subrectangles. Set and return to Step 1.
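The overall branch-reduce-bound loop can be illustrated on a toy monotonic problem. The sketch below is not the paper's algorithm (it omits the linear relaxation and both reduction rules): it combines the "obvious bound" from Section 3.1, namely the value of the increasing objective at a box's lower corner, with standard bisection and best-first node selection, to minimize an increasing f over a box subject to a reverse constraint h(x) >= 1 with h increasing. All names and the example problem are ours.

```python
import heapq

def brb_toy(f, h, box, eps=1e-2, max_iter=100000):
    """Minimal branch-and-bound sketch for: min f(x) s.t. h(x) >= 1, x in box,
    with f and h increasing on the box (given as a list of (low, high) pairs).

    Lower bound on a box: f at its lower corner (valid since f is increasing).
    Fathoming: a box can contain feasible points only if h at its upper
    corner is >= 1, in which case that upper corner is itself feasible.
    """
    lo0 = [b[0] for b in box]
    hi0 = [b[1] for b in box]
    if h(hi0) < 1.0:
        return None                       # whole problem infeasible
    best_val, best_x = f(hi0), hi0        # incumbent from a feasible corner
    heap = [(f(lo0), box)]                # active nodes keyed by lower bound
    for _ in range(max_iter):
        if not heap:
            break
        lb, bx = heapq.heappop(heap)
        if lb >= best_val - eps:
            break                         # smallest bound close to incumbent
        # branch: bisect the longest edge
        j = max(range(len(bx)), key=lambda i: bx[i][1] - bx[i][0])
        mid = 0.5 * (bx[j][0] + bx[j][1])
        for child in (bx[:j] + [(bx[j][0], mid)] + bx[j + 1:],
                      bx[:j] + [(mid, bx[j][1])] + bx[j + 1:]):
            clo = [b[0] for b in child]
            chi = [b[1] for b in child]
            if h(chi) < 1.0:
                continue                  # fathom: no feasible point inside
            if f(chi) < best_val:         # upper corner is feasible
                best_val, best_x = f(chi), chi
            clb = f(clo)
            if clb < best_val - eps:
                heapq.heappush(heap, (clb, child))
    return best_val, best_x
```

On the toy instance min x1 + x2 subject to x1 * x2 >= 1 over [0, 2]^2, whose optimum is 2 at (1, 1), the sketch converges to within the tolerance after a modest number of node expansions; the paper's reduction rules and LP bounds would shrink this search further.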

*Convergence Analysis*. We now discuss the convergence of the proposed algorithm. Assume that the number of globally optimal solutions of (SGP) is finite. Then the proposed algorithm either terminates finitely at a globally optimal solution or generates an infinite sequence of iteration nodes. If the algorithm terminates at some iteration , then obviously the point is a globally optimal solution and is the optimal value of problem (). If the algorithm is infinite, its convergence is discussed as follows.

Theorem 9. *Assume that the above algorithm is infinite; then it generates an infinite sequence of iterations such that along any infinite branch-and-bound tree any accumulation point of the sequence will be the global minimum of problem ().*

*Proof. *Since the algorithm is infinite, it generates an infinite sequence such that a subsequence of satisfies for . In this case, for every iteration , from [28, 29] there is at least an infinite subsequence of such that
where denotes the feasible region of problem (). We see from [28â€“30] that is a nondecreasing sequence bounded above by , which guarantees the existence of the limit and . Since is an infinite sequence on a compact set, it follows that there exists a convergent subsequence of satisfying and , where is a subsequence of . The linear functions used in the problem are strongly consistent on . Thus, . All that remains is to show that . Since is a closed set, it follows that . Suppose that . Then there exists some , , such that . Since is continuous, the sequence converges to as . By definition of convergence, such that as , and so when , implies that the problem is infeasible. This contradicts the assumption of . Therefore, ; that is, , and the proof is complete.

#### 5. Numerical Results

To verify the performance of the proposed algorithm, we give computational results for ten test problems. The algorithm is coded in Compaq Visual Fortran, and the simplex method is applied to solve the relaxation linear programming problems. All test problems are solved on a microcomputer with an Athlon CPU at 2.31 GHz and 960 MB of RAM.

*Example 1 (see [22, 31, 32]). *Consider

*Example 2 (see [22, 32]). *Consider

*Example 3 (see [21, 22, 31, 33]). *Consider

*Example 4 (see [21, 22, 33]). *Consider

*Example 5 (see [21, 22, 33]). *Consider

*Example 6 (see [21, 24, 34]). *Consider

*Example 7 (see [24]). *Consider

*Example 8 (see [22]). *Consider