Abstract

This paper presents an algorithm for globally solving the generalized nonlinear multiplicative programming problem (MP) with a nonconvex constraint set. The algorithm uses a branch-and-bound scheme based on an equivalent reverse convex programming problem. As a result, the main computational work reduces to solving a sequence of linear programs that do not grow in size from iteration to iteration. Furthermore, several key strategies are proposed to accelerate the solution process, and some of them can also be used to solve general reverse convex programming problems. Numerical results show that these strategies markedly improve the computational efficiency.

1. Introduction

Consider the following generalized nonlinear multiplicative programming problem: where , , and are all real numbers with and .

Problem (MP) is worth studying because it frequently arises in many applications, including engineering design [1–6], economics and statistics [7–12], manufacturing [13, 14], chemical equilibrium [15, 16], financial optimization [17], and plant layout design [18]. On the other hand, many other nonlinear problems, such as quadratic programming (QP), bilinear programming (BLP), linear multiplicative programming (LMP), polynomial programming, and generalized geometric programming, fall into the category of (MP).

Problem (MP) usually poses significant theoretical and computational difficulties; that is, it is known to generally possess multiple local optima that are not globally optimal. For example, the problems (LMP), (BLP), and (QP) are all multiextremal. Both (LMP) and (MP) are known to be NP-hard global optimization problems [19, 20], and they have therefore attracted the interest of researchers and practitioners. Over the past years, many solution algorithms have been proposed for special forms of problem (MP). These methods can be classified as parameter-based methods [21–23], branch-and-bound methods [24–28], outer-approximation methods [29, 30], mixed branch-and-bound and outer-approximation methods [31], vertex enumeration methods [32, 33], outcome-space cutting plane methods [34], and heuristic methods [35, 36].

Although there has been significant progress in the development of deterministic algorithms for finding global optimal solutions of the generalized linear multiplicative programming problem, to our knowledge little work has been done on globally solving the generalized nonlinear multiplicative programming problem (MP). The purpose of this paper is to develop a reliable and effective algorithm for solving problem (MP). In the algorithm, a variable transformation first reformulates the original problem (MP) as an equivalent reverse convex program (RCP). Then, using exponential and logarithmic functions, a linear relaxation program is generated that provides a lower bound on the optimal value of problem (RCP) within the branch-and-bound search. Compared with the methods reviewed above, the mathematical model considered in this paper is an important extension of the models given in [24, 26, 37], and the presented linear relaxation technique can be viewed as an extension of the one proposed in [24, 26, 37]. Moreover, an upper bound updating strategy, based on the proposed global solution location rule, is given that provides a better bound than standard branch-and-bound methods (e.g., [24, 26–28, 37]). Also, the reduction cut given in this paper makes it possible to cut away a large part of the currently investigated region in which no globally optimal solution of (MP) exists. Finally, the numerical results show that the proposed algorithm is feasible and computationally advantageous.

The remainder of this paper is organized as follows. In Section 2, we present the problem (RCP) that is equivalent to problem (MP). The four key strategies of the algorithm are detailed in Section 3. A precise algorithm statement and its convergence are given in Section 4. Section 5 reports numerical results obtained by applying the algorithm to several sample problems. Some concluding remarks are given in Section 6.

2. Equivalent Reformulation

In this section, we show that any problem (MP) can be transformed into an equivalent reverse convex programming problem with a single reverse convex constraint. To see how such a reformulation is possible, we first introduce some notation: Thus, one can convert (MP) into the following equivalent reverse convex programming problem (RCP): where, for each , the functions are all convex. Furthermore, let be a matrix with entries and let be a vector with components . Then, by using (3), the problem can be rewritten in the form The key equivalence result for problems (MP) and (RCP) is given by the following theorem.

Theorem 1. If is a global optimal solution to problem (RCP), then , with , is a global optimal solution to problem (MP). Conversely, if is a global optimal solution to problem (MP), then , with , is a global optimal solution to problem (RCP).

Proof. The proof of this theorem follows easily from the definitions of problems (MP) and (RCP); therefore, it is omitted.

3. Key Strategies of the Algorithm

From Theorem 1, to globally solve problem (MP), the branch-and-bound algorithm to be presented concentrates on globally solving the equivalent problem (RCP). To present the algorithm, we first explain several processes: branching, lower and upper bounding, and reduction cut.

The branching process consists of a successive rectangular partition of the initial box , following an exhaustive subdivision rule; that is, any infinite nested sequence of partition sets generated by the algorithm shrinks to a singleton. A strategy called bisection of ratio will be used in the branching process.

The lower bounding process consists of deriving a linear relaxation program of problem (RCP) via a two-part linearization method. A lower bound on the objective function value can be found by solving this linear relaxation program.

The upper bounding process consists of estimating an upper bound on the objective function value by a new method proposed in this paper. This method differs from the usual one, which updates the upper bound by examining all feasible points found while computing the lower bounds for the primal problem (RCP).

The reduction cut process consists of applying valid cuts (referred to as reduction cuts) to reduce the size of the current partition set . The cuts aim at tightening the box containing the feasible portion currently still of interest.

Next, we will give the four key strategies for forming the corresponding processes, respectively.

3.1. Bisection of Ratio

The algorithm performs a branching process that iteratively subdivides the -dimensional rectangle of problem (RCP) into smaller rectangles, each also of dimension . This process helps the algorithm locate a point in that is a global optimal solution of problem (RCP). At each stage of the process, the subdivision yields a more refined partition [28] of a portion of that is guaranteed to contain a global optimal solution. The initial partition consists simply of .

During a typical iteration of the algorithm, a rectangle available from the previous iteration is subdivided into two -dimensional rectangles by a process called bisection of ratio , where is a prechosen parameter satisfying . Let , where for all . The procedure for forming a bisection of ratio of into two subrectangles and can be described as follows.
(1) Let
(2) Let satisfy
(3) Let
Clearly, if , then the bisection of ratio reduces to the standard bisection rule.
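To make the rule concrete, the following sketch implements one plausible reading of the bisection of ratio (the paper's own implementation is in C++; the Python rendering and the helper name bisect_ratio_box are ours): the longest edge of the current box is split at the point lying a fraction of the way along that edge, so that the value recovers standard bisection.

    # A minimal sketch of bisection of ratio alpha, assuming the rule splits
    # the longest edge of Y = [lower, upper] at lower_p + alpha*(upper_p - lower_p).
    def bisect_ratio_box(lower, upper, alpha=0.5):
        assert 0.0 < alpha < 1.0
        # index of the longest edge of the box
        p = max(range(len(lower)), key=lambda i: upper[i] - lower[i])
        split = lower[p] + alpha * (upper[p] - lower[p])
        left_upper, right_lower = list(upper), list(lower)
        left_upper[p] = split
        right_lower[p] = split
        return (list(lower), left_upper), (right_lower, list(upper))

For example, bisect_ratio_box([0.0, 0.0], [1.0, 2.0], alpha=0.4) splits the second (longest) edge at 0.8.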

3.2. Linearization Strategy

For each rectangle created by the branching process, the purpose of the linearization strategy is to obtain a lower bound for the optimal value of the corresponding subproblem . This lower bound is found by solving a single linear relaxation program of problem . To derive the relaxation, we adopt a two-part linearization method. In the first part, we derive the lower and upper bounding functions of each function . Then, in the second part, we derive a linear lower bounding function (LLBF) and a linear upper bounding function (LUBF) for each sum term of . The details of this procedure are given below.

First-part linearization: it is well known that the function is concave in the single variable . Let and denote the LLBF and LUBF of over the interval , respectively. Then, from the concavity of , it follows that where .

Next, to help derive the LLBF and LUBF of each function , we need to introduce some notation. For any , let with ; then we have , where Moreover, let ; this implies , where and . Denote Thus, from (11), it follows that the lower bounding function of over , denoted by , has the following form: Similarly, from (12), the upper bounding function of over , denoted by , is as follows: Based on the previous results, we have Therefore, for each , the first-part lower bounding function of in (7), denoted by , is given by and the first-part upper bounding function of in (7), denoted by , is as follows:
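As a small numeric illustration of the first part (not the paper's exact formulas), suppose the concave univariate function involved behaves like ln t on an interval [a, b]. By concavity, the chord through the endpoints is a lower bounding function and the tangent at any point of the interval is an upper bounding function, and both are linear:

    import math

    # Chord of ln on [a, b]: lies below ln by concavity (a linear lower bound).
    def chord_lower(a, b):
        k = (math.log(b) - math.log(a)) / (b - a)   # chord slope
        return lambda t: math.log(a) + k * (t - a)

    # Tangent of ln at t0: lies above ln by concavity (a linear upper bound).
    def tangent_upper(t0):
        return lambda t: math.log(t0) + (t - t0) / t0

    lo, up = chord_lower(1.0, 4.0), tangent_upper(2.0)
    for t in (1.0, 2.0, 3.0, 4.0):
        assert lo(t) <= math.log(t) <= up(t)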

Second-part linearization: by a similar method, we can derive the corresponding LLBF and LUBF of the function over the interval such that where From (22), we can derive the LLBF and LUBF of over as follows: where Then, for each , substituting each term in (20) by , we may derive the LLBF of over , denoted by , in the following form: and it follows that for all .

If , substituting the terms in (21) by , we obtain the LUBF of over , denoted by , as follows: and it follows that for all .
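For the second part the roles are reversed: if, as we assume here purely for illustration, the convex univariate function involved is the exponential, then over an interval $[a,b]$ the tangent underestimates it and the chord overestimates it:

    \[
    e^{t_0} + e^{t_0}\,(t - t_0) \;\le\; e^{t} \;\le\; e^{a} + \frac{e^{b} - e^{a}}{b - a}\,(t - a),
    \qquad t,\, t_0 \in [a, b],
    \]

and both bounding functions are linear in $t$, as an LLBF and LUBF require.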

From the above discussion of the two kinds of constraints, we can construct the corresponding linear relaxation program of problem as follows: Obviously, after the functions are replaced by the corresponding linear functions, the feasible region of problem is contained in the new feasible region of the relaxation, and we have the following lemma.

Lemma 2. Assume that is the minimum of the problem ; then provides a lower bound for the optimal value of the problem .
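Once the linearization has been carried out, each bounding subproblem is an ordinary linear program, so any LP solver yields the lower bound of Lemma 2. The sketch below illustrates this with SciPy; the data c, A_ub, b_ub are placeholders standing in for the linearized objective and constraints over a box, not the paper's actual relaxation.

    from scipy.optimize import linprog

    c    = [1.0, 2.0]                 # placeholder linearized objective
    A_ub = [[-1.0, 1.0], [1.0, 1.0]]  # placeholder linearized constraint rows
    b_ub = [1.0, 4.0]
    box  = [(0.0, 3.0), (0.0, 3.0)]   # current rectangle

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=box, method="highs")
    lower_bound = res.fun             # Lemma 2: a valid lower bound over the box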

3.3. Global Solution Location and Upper Bound Updating

As is known, in a general branch-and-bound algorithm, the usual method to update the upper bound of the optimal value for problem (RCP) is to examine all feasible points found while computing the lower bounds for the primal problem (RCP). In this paper, we adopt a new method to update the upper bounds, which differs from the usual one. Toward this end, we first discuss the location of the global solution.

Let and . Since the functions , are all convex, both sets and are convex. Clearly, the feasible region of problem (RCP) lies in the set . For problem (RCP), there are two cases at the global solution, denoted by .

Case 1. We have .

Case 2. We have .

In Case 1, the reverse convex constraint is nonactive at the global solution , so it can be dropped from the primal problem; hence problem (RCP) is equivalent to the following problem: which is a convex program and can be solved by many effective algorithms. Obviously, if the optimal solution of the above problem satisfies the constraint , then it also solves (RCP).

In Case 2, problem (RCP) is equivalent to the following problem: In this case, we always make the following assumption.

Assumption 1. A point is available such that , , .

Clearly, if such a point does not exist, then we only need to solve problem (RCP1) to obtain the solution of the primal problem (RCP).

In this paper, we concentrate on solving the problem in Case 2. It is expedient to indicate some immediate consequences of the above assumption, which locate the solution of (RCP).

Let denote the boundary of ; that is, .

For every , we can find the point where the line segment meets . Clearly, it is given by with , which can be determined from the following equation:

Because of the convexity of , and since both points and lie in the set , the point lies in too.

Lemma 3. For every such that , one has .

Proof. Since , and , from the convexity of and Assumption 1, we have

Corollary 4. Under Assumption 1, if is the optimal solution of problem (RCP), it lies on .

According to Lemma 3 and Corollary 4, the optimal solution must lie on the boundary . Therefore, once a feasible point is found, we first compute the point that lies on and satisfies ; the upper bound of the optimal value for problem (RCP) is then updated as Hence, each time a better upper bound is obtained, the number of deleted nodes increases, and the unnecessary branching and bounding on regions where the global solution does not exist decrease greatly.
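Computationally, the boundary point used in this updating rule can be obtained by a one-dimensional bisection along the segment joining the interior point of Assumption 1 to the feasible point found. A minimal sketch, assuming only a convex g with g < 0 at one endpoint and g >= 0 at the other:

    # Bisection on lambda to solve g(x_bar + lambda*(w - x_bar)) = 0; the
    # convexity of g gives a unique crossing of the segment with g = 0.
    def boundary_point(g, x_bar, w, tol=1e-8):
        lo, hi = 0.0, 1.0                 # g(..) < 0 at lo, g(..) >= 0 at hi
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            y = [a + mid * (b - a) for a, b in zip(x_bar, w)]
            if g(y) < 0:
                lo = mid
            else:
                hi = mid
        return [a + hi * (b - a) for a, b in zip(x_bar, w)]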

3.4. Reduction Cut

In this subsection, we turn our attention to forming a bound reduction technique that accelerates the convergence of the proposed global optimization algorithm.

Assume that is a currently known upper bound of the optimal objective value of problem (RCP). For any with , consider the problem . For the sake of convenience, let the objective function and the linear constraint functions of be expressed as Then, from (26) and (27) it follows that

In the following, some notation is introduced: where .

Based on the optimality and feasibility of problem (RCP), we give two theorems below that identify regions guaranteed to contain no optimal solution; a reduction cut technique is then formed from these theorems.

Theorem 5. For any subrectangle with , the following statements hold.
(i) If , then there exists no optimal solution of problem (RCP) over the subrectangle .
(ii) If , consider the following two cases: if there exists some satisfying and , then there is no optimal solution of (RCP) over ; conversely, if and for some , then there exists no global optimal solution of (RCP) over , where

Theorem 6. For any with , if for some , then there exists no optimal solution of problem (RCP) over ; otherwise, for each , consider the following two cases.
(i) If there exists some satisfying and , then there is no optimal solution of problem (RCP) over .
(ii) If and for some , then no optimal solution of problem (RCP) over exists, where

Note that the proof of Theorems 5 and 6 is similar to that of Theorems 1 and 2 in [26]; therefore, it is omitted.

Reduction Cut

(S1) Optimality Cut. Compute . If , then let ; otherwise compute . If and for some , then let and with . If and for some , then let and with .

(S2) Feasibility Cut. For any , compute and . If for some , then let ; otherwise compute . If and for some and , then let and with . If and for some and , then let and with .

This reduction cut makes it possible to cut away a large part of the subrectangle currently being investigated by the algorithm.
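The sketch below conveys the flavor of such an optimality cut without reproducing Theorems 5 and 6 verbatim: assuming the relaxed objective is a linear function c^T y + d over the box and UB is the incumbent upper bound, every point violating c^T y + d <= UB can be discarded, which tightens the bounds coordinate by coordinate.

    # Optimality-based range reduction for c^T y + d <= UB over [lower, upper].
    def reduce_box(c, d, lower, upper, UB):
        # smallest value the linear bound can attain over the box
        m = d + sum(min(ci * li, ci * ui)
                    for ci, li, ui in zip(c, lower, upper))
        if m > UB:
            return None                   # the whole box can be deleted
        slack = UB - m
        lo, up = list(lower), list(upper)
        for j, cj in enumerate(c):
            if cj > 0:
                up[j] = min(up[j], lo[j] + slack / cj)
            elif cj < 0:
                lo[j] = max(lo[j], up[j] + slack / cj)
        return lo, up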

4. Algorithm and Its Convergence

Based upon the results and the algorithmic processes discussed in Section 3, the basic steps of the proposed global optimization algorithm are summarized as follows.

Let denote the optimal objective function value of , and let denote an element of the corresponding argmin.

4.1. Algorithm Statement

Step 1 (initialization).
(i) Solve problem (RCP1) with standard convex programming software to obtain its solution . If , stop with as the global solution of the primal problem (RCP).
(ii) Let the set of all active nodes be , the convergence tolerance , the bisection ratio , the upper bound , and the iteration counter .
(iii) Find an optimal solution and the optimal value for problem . If , compute as given in Section 3.3 and set . Set the initial lower bound .
(iv) If , then stop; is a global -optimal solution for problem (RCP). Otherwise, go to Step 2.

Step 2 (updating the upper bound). Select the midpoint of ; if is feasible to , then compute as given in Section 3.3 and update the upper bound .

Step 3 (reduction). For the subrectangle currently being investigated, apply the reduction cut described in Section 3.4 to shrink ; the remaining part is still denoted by .

Step 4 (branching). Use the bisection of ratio described in Section 3.1 to obtain two new subrectangles, and denote the set of new partition rectangles by . For each , compute the lower bound and . If there exists some such that one of the lower bounds satisfies or for some or , then the corresponding subrectangle is eliminated from , that is, , and we skip to the next element of .

Step 5 (bounding). If , solve problem to obtain and for each . If , set . Otherwise, if , then compute , update the upper bound , and update such that . The remaining partition set is now , and a new lower bound is .

Step 6 (convergence checking). Set . If , then stop; is the optimal value and is the optimal solution. Otherwise, select an active node such that for further consideration, set , and return to Step 2.
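The overall flow of Steps 1-6 can be summarized by the following high-level sketch. The helpers lower_bound, obj, feasible, reduce_box_or_none, and bisect_ratio_box are hypothetical stand-ins for the procedures of Section 3, and the sketch omits the initial convex subproblem (RCP1) and the boundary-point computation of Section 3.3 for brevity.

    import heapq, itertools

    def branch_and_bound(Y0, eps, alpha):
        UB, best, tie = float("inf"), None, itertools.count()
        lb0, x0 = lower_bound(Y0)                    # root relaxation
        active = [(lb0, next(tie), Y0, x0)]          # min-heap keyed by LB
        while active:
            LB, _, Y, x = heapq.heappop(active)
            if LB >= UB - eps:                       # Step 6: convergence check
                break
            if feasible(x) and obj(x) < UB:          # Step 2: upper bounding
                UB, best = obj(x), x
            Y = reduce_box_or_none(Y, UB)            # Step 3: reduction cut
            if Y is None:
                continue
            for sub in bisect_ratio_box(*Y, alpha):  # Step 4: branching
                lb, xs = lower_bound(sub)            # Step 5: bounding
                if lb < UB - eps:                    # fathom hopeless nodes
                    heapq.heappush(active, (lb, next(tie), sub, xs))
        return UB, best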

Next, we establish the global convergence of the above algorithm. If the algorithm does not stop finitely, then the branching rule guarantees that the nested sequence of intervals generated for each variable shrinks to a single point.

Theorem 7. For any , let ; then, for any , , one has as for each .

Proof. First, let us consider the case . Let where .
Obviously, if we want to prove as , we only need to prove and , as .
From (7), (20), and the definition of in (15), we have Since is a concave function of for any , it attains its maximum at the point . Let ; then by computation we have By the definition of , if , we have , which implies that as . Therefore, we can obtain that
Next, we prove that , as . From (20) and (26), we have Now, let us denote Then can be rewritten in the following form: Since is a convex function of over the interval , it attains its maximum at the point or . Let and ; then, through computation, we can derive that Additionally, by the definitions of and , we know that for any , and as , and so tends to as each approaches zero. This implies that and as ; that is, as . Hence, according to the above discussion, it follows that
Based on the above discussion, for each , from (40) it follows that
Second, let us consider . Let where , . By using (7), (21), and the definition of in (15), we have Since is a convex function of over the interval , it attains its maximum at the point or . Let ; then, by a discussion similar to the one above, we can obtain that Since as , we have and ; that is, , as . Hence, according to the above discussion, we can obtain that
On the other hand, from (27) and (21) it follows that Now, let us denote Then can be rewritten as follows: Since is a concave function of over the interval , it attains its maximum at the point . Let and ; then by computation we get By the definitions of and , we have and as . This implies that as . Therefore, we have
Thus, by (50) and the above discussion, it is obvious that
In summary, according to the above results, the proof is complete.

Remark 8. From Theorem 7, it follows that and will approximate the corresponding functions and as .

Theorem 9. (a) If the algorithm terminates finitely, then upon termination the incumbent solution is a global -optimal solution for problem (RCP).
(b) If the algorithm is infinite, then it generates an infinite sequence of iterations such that, along any infinite branch of the branch-and-bound tree, any accumulation point of the sequence is a global solution of problem (RCP).

Proof. (a) If the algorithm is finite, then it terminates at some Step , . Without loss of generality, denote the incumbent solution upon termination by . By the algorithm, it follows that . From (iv) of Step 1 and from Step 6, this implies that . Let denote the optimal value of problem (RCP); then, by Section 3, we know that . Since is a feasible solution of problem (RCP), we have . Taken together, this implies that Therefore, is a global -optimal solution for problem (RCP), and the proof of part (a) is complete.
(b) When the algorithm is infinite, a sufficient condition for a global optimization algorithm to converge to the global minimum, stated in [28], is that the bounding operation be consistent and the selection operation be bound improving.
A bounding operation is called consistent if at every step any unfathomed partition can be further refined and if any infinitely decreasing sequence of successively refined partition elements satisfies where is a computed upper bound at stage and is the best lower bound at iteration , not necessarily occurring inside the same subrectangle as . In the following, we show that (61) holds.
Since the employed subdivision process is exhaustive, it follows from Theorem 7 and the relationship that (61) holds; this implies that the employed bounding operation is consistent.
A selection operation is called bound improving if, after a finite number of refinements, at least one partition element where the actual upper bound is attained is selected for further partition. Clearly, the employed selection operation is bound improving because the partition element where the actual upper bound is attained is selected for further partition in the immediately following iteration.
In summary, we have shown that the bounding operation is consistent and that the selection operation is bound improving. Therefore, according to Theorem IV.3 in [28], the proposed global optimization algorithm converges to the global minimum of (RCP).

5. Numerical Experiments

To demonstrate the potential and feasibility of the proposed global optimization algorithm, our numerical experiments are reported in this section. The algorithm is coded in C++, and each linear program is solved by the simplex method. The convergence tolerance is set to in our experiments.

Example 10 (see [37]). Consider

Example 11 (see [37]). Consider

Example 12 (see [37]). Consider

Example 13 (see [26]). Consider

Example 14 (see [24]). Consider

Example 15 (see [24]). Consider

Example 16. Consider

In order to test the effectiveness of the several key strategies in Section 3, we solve Examples 12–15 above while adopting different strategies in the branch-and-bound search; the corresponding computational results are summarized in Tables 1, 2, 3, and 4, respectively.

In Table 1, by setting , the test does not adopt the reduction cut (i.e., Step 3 of the algorithm is skipped), and the new method to update upper bounds is not applied; that is, is replaced by in (iii) of Step 1 and in Step 5 of the algorithm. For Tables 2–4, the settings of each test are explained in the corresponding table headers.

In these tables, the following notation is used for the column headers: Iter: the number of algorithm iterations; : the maximal number of active nodes needed; and Time: the execution time in seconds.

The computational results show that the proposed algorithm can globally solve problem (MP) effectively. Furthermore, a comparison of the numerical results in Tables 1–4 shows that the proposed strategies, especially the reduction cut, the upper bound updating, and the bisection of ratio , are very effective in decreasing the number of iterations, the maximal number of active nodes, and the running CPU time.

Additionally, in order to test our algorithm further, we report computational results on randomly generated problems. In Table 5 below, the convergence tolerance is set to , and the average CPU time (denoted by Ave. time), average number of iterations (denoted by Ave. Iter), and average maximal node number (denoted by Ave. ) are obtained by running the algorithm 10 times.

Example 17. Consider where , , and are generated randomly in the intervals , , and , respectively, and where and are generated randomly in the intervals and .

It is seen from Table 5 that the sizes of and are the main factors affecting the performance of the algorithm. This is mainly because the number of terms in the subproblem linear programs is proportional to or . Also, the CPU time increases as or increases, but not as sharply as or .

6. Concluding Remarks

A deterministic global optimization algorithm has been proposed for solving problem (MP). It successfully reduces the complicated problem (MP) to a simpler reverse convex programming problem (RCP). Based on the characteristics of problem (RCP), several global optimization strategies have been proposed. The first is the bisection of ratio , which provides a more flexible subdivision rule. The second is the linearization method: by adopting the two-part linearization method, a linear relaxation program of problem (RCP) is obtained whose minimum provides a lower bound on the minimum of problem (RCP). The third is global solution location and upper bound updating, which provides a method to locate the global solutions of (RCP) and decreases the maximal number of active nodes and the computational effort required by the algorithm. The final strategy is the reduction cut, an accelerating device that can cut away all or a large part of the currently investigated feasible region in which no global optimal solution exists. A branch-and-bound algorithm adopting these four strategies has been presented and shown to converge to the global solution. The numerical results show that our algorithm is effective and feasible. It is worth noting that the third strategy can also be used to solve general reverse convex programming problems effectively.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was supported by the National Natural Science Foundation of China (11171094; 11171368).