Abstract

This paper is concerned with an efficient global optimization algorithm for solving a class of fractional programming problems whose objective and constraint functions are all defined as sums of ratios of generalized polynomial functions. The proposed algorithm combines a branch-and-bound search with two reduction operations, based on an equivalent monotonic optimization reformulation of the problem. In particular, the proposed reduction operations make it possible to cut away a large part of the currently investigated region in which no global optimal solution exists, and thus serve as an accelerating device for the solution algorithm. Furthermore, numerical results show that these operations improve the computational efficiency, in terms of both the number of iterations and the overall execution time of the algorithm, compared with other methods. Additionally, the convergence of the algorithm is presented, and the computational issues that arise in implementing the algorithm are discussed. Preliminary indications are that the algorithm can be expected to provide a practical approach for solving the problem, provided that the number of variables is not too large.

1. Introduction

Consider the following generalized polynomial fractional program: where and , , , , , , and are all arbitrary real numbers.

The problem is worth studying because it frequently appears in many applications, including financial optimization, portfolio optimization, engineering design, manufacturing, and chemical equilibrium (see, e.g., [1–8]). On the other hand, many other nonlinear problems, such as quadratic programs, linear (or quadratic, polynomial) fractional programs [9–13], linear multiplicative programs [14–16], polynomial programs, and generalized geometric programs [17–20], can all be put into this form.

The problem is obviously multiextremal, for its special cases, such as the quadratic program, the linear fractional program, and the linear multiplicative program, are multiextremal and known to be NP-hard [21]; it therefore falls into the domain of global optimization problems.

In the last few decades, many solution algorithms have been developed to globally solve special cases of the problem (see, e.g., [9–14, 17–19, 22, 23]), but global optimization algorithms for its general form are scarce. Recently, using linear relaxation methods, Wang and Zhang [24], Shen and Yuan [25], and Jiao et al. [26] gave corresponding global optimization algorithms for finding the global minimum of the problem. Also, Fang et al. [27] presented a canonical dual approach for minimizing the sum of a quadratic function and the ratio of two quadratic functions.

In this paper, we suggest an efficient algorithm for globally solving the problem. The goal of this research is fourfold. First, by introducing variables and by using a transformation, the original problem is equivalently reformulated as a monotonic optimization problem based on the characteristics of the problem; that is, the objective function is increasing and all the constraint functions can be expressed as differences of two increasing functions. Second, in order to present an efficient algorithm, two reduction operations are incorporated into the branch-and-bound framework to suppress the rapid growth of the branching tree and thus enhance the solution procedure. In particular, the proposed reduction cut does not appear in other branch-and-bound methods (see [24, 25]) and is more easily implemented than the one in [28], because the latter (see (2.4) and (2.5) in [28]) requires solving nonlinear nonconvex programs, whereas the former only requires finding the roots of several strictly monotonic single-variable equations. Third, by directly applying the proposed algorithm, one can also obtain the essential upper and lower bounds of the denominator of each ratio in the objective function of the problem; these bounds are tighter than the ones given by the Bernstein algorithm (see [24, 25]), so Assumption 1 in [24, 25] is not needed in this paper. Finally, numerical results show that the proposed algorithm is feasible and effective.

The paper is organized as follows. In Section 2, an equivalent reformulation of the original problem is given. Next, Section 3 presents and discusses the basic operations of the algorithm for globally solving the problem. The algorithm itself is presented and its convergence is shown in Section 4. In Section 5, the computational results are reported.

2. Equivalent Monotonic Reformulation

For the convenience of the following discussion, assume that there exist positive scalars , such that and for all , for each . In fact, , can be obtained by the algorithm to be proposed in this paper (see Section 5); therefore, define the set

Without loss of generality, assume that , and , , . By introducing variables , , the problem is then equivalent to the following problem:

Theorem 1. If is a global optimal solution for problem , then , , and is a global optimal solution for problem . Conversely, if is a global optimal solution for problem , then is a global optimal solution for problem , where , .

Proof. See Theorem 1 in [24]; the proof is omitted here.

In what follows, we show that the problem can be transformed into a monotonic optimization problem in which the objective function is increasing and all the constraint functions are differences of two increasing functions. To see how such a reformulation is possible, we first consider each constraint of . Let For any , , it follows from each constraint of that By using the above notation, one can thus convert into the form where for and for . Note that all the exponents are positive in the constraints of problem . Thus, by applying the following exponent transformation to the formulation , letting and , and by changing the notation, an equivalent problem of problem can then be given by where
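For illustration, the idea behind the exponent transformation can be sketched as follows: writing each variable as the exponential of a new variable (here denoted y_i = exp(t_i) purely for illustration), a generalized monomial with arbitrary real exponents becomes an exponential of a linear form, which is increasing whenever its coefficient is positive and all its exponents are nonnegative. The sketch below uses hypothetical coefficient data and is not part of the paper's formal development.

```python
import math

def monomial(c, a, y):
    """Generalized monomial c * prod(y_i ** a_i) with arbitrary real
    exponents a_i (the y_i are assumed positive)."""
    return c * math.prod(yi ** ai for yi, ai in zip(y, a))

def monomial_after_exponent_transform(c, a, t):
    """The same monomial after the exponent transformation y_i = exp(t_i):
    it becomes c * exp(a^T t), increasing in t when c > 0 and a >= 0."""
    return c * math.exp(sum(ai * ti for ai, ti in zip(a, t)))

# usage: both evaluations agree, e.g. with c = 2, a = (0.5, 1.3), y = (e, e)
y = (math.e, math.e)
t = tuple(math.log(yi) for yi in y)
assert abs(monomial(2.0, (0.5, 1.3), y)
           - monomial_after_exponent_transform(2.0, (0.5, 1.3), t)) < 1e-12
```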

Next, we turn to consider the objective function of . For convenience, for each , we assume, without loss of generality, that for and for . In addition, some notations are introduced as follows: Then, by introducing an additional vector , we can convert the problem into where is defined in .

Note that the objective function of is increasing and each constraint function is the difference of two increasing functions. The key equivalence result for problems and is given in the following Theorem 2.

Theorem 2. is a global optimal solution for problem if and only if is a global optimal solution for problem , where

Proof. The proof of this theorem follows easily from the definitions of problems and ; therefore, it is omitted here.

From Theorem 2, instead of solving problem , we may solve problem . In addition, it is easy to see that the global optimal values of problems and are equal. Let with and and let then, without loss of generality, by changing the notation, problem can be rewritten in the following form: where Based on the above discussion, to globally solve the problem, a branch-reduce-bound (BRB) algorithm will be presented that concentrates on solving the problem .

3. Basic Operations

In order to globally solve the problem , the proposed BRB approach consists of several basic operations: the successively refined partitioning of the feasible set; the estimation of a lower bound for the optimal value of the objective function over each subset generated by the partitions; and the reduction operations, which reduce the size of each partition subset without losing any feasible solution currently still of interest. Next, we establish the basic operations needed in the branch-and-bound scheme.

Let denote the rectangle or subrectangle of generated by the algorithm. Consider the following subproblem:

3.1. Partition Rule

The critical element in guaranteeing convergence to a minimum of is the choice of a suitable partition strategy. In this paper, we choose the standard bisection branching rule. This rule is sufficient to ensure convergence since it drives all the intervals to singletons for all the variables associated with the term that yields the greatest discrepancy in the employed approximation along any infinite branch of the branch-and-bound tree.

Consider any node subproblem identified by rectangle . The procedure for dividing into two subrectangles and can be described as follows.
(i) Let
(ii) Let

Through this branching rule, the rectangle is partitioned into two subrectangles and .
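As an illustration of this branching step, the following sketch bisects a box along its longest edge at the midpoint; the concrete edge-selection rule of steps (i)–(ii) above is left abstract, so this is a generic sketch rather than the paper's exact rule.

```python
def bisect(box):
    """Split a rectangle [l, u] into two subrectangles along its longest edge.

    `box` is a pair (l, u) of lists of equal length.  Generic
    longest-edge midpoint bisection; illustrative only.
    """
    l, u = box
    j = max(range(len(l)), key=lambda i: u[i] - l[i])  # longest edge
    mid = 0.5 * (l[j] + u[j])
    left = (l[:], u[:j] + [mid] + u[j + 1:])   # lower half along edge j
    right = (l[:j] + [mid] + l[j + 1:], u[:])  # upper half along edge j
    return left, right

# usage: bisect the box [0, 1] x [0, 2] along its longer second edge
print(bisect(([0.0, 0.0], [1.0, 2.0])))
```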

3.2. Lower Bound

For each rectangle , we intend to compute a lower bound of the optimal value of over ; that is, compute a number such that

To ensure convergence, this lower bound must be consistent in the sense that, for any infinite nested sequence of boxes shrinking to a single point ,

Clearly, a lower bound is , and any bound such that will satisfy (21) since is increasing.
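Because the objective of the monotonic reformulation is increasing in every coordinate, its value at the lower corner of a box is already a valid lower bound; a minimal sketch of this trivial bound, for an arbitrary increasing function f:

```python
def trivial_lower_bound(f, box):
    """Trivial lower bound of an increasing objective f over the box [l, u].

    Since f is increasing in each coordinate, its minimum over the box
    is attained at the lower corner l, so f(l) is a valid (though often
    loose) lower bound.  Illustrative sketch only.
    """
    l, _u = box
    return f(l)

# usage: an increasing function over the box [1, 2] x [0, 3]
lb = trivial_lower_bound(lambda y: y[0] ** 2 + y[1], ([1.0, 0.0], [2.0, 3.0]))
```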

Although such a bound (for a box ) is sufficient to guarantee convergence, tighter bounds are usually necessary for the lower bounding procedure to achieve reasonable efficiency. For instance, the following procedure may give a better bound.

Consider the subproblem and denote the optimal value of problem by . Our main method for computing a valid lower bound of over is to solve the relaxation linear programming of by using a linearization technique. This technique can be realized by underestimating every function and and by overestimating every function , for each . All the details of this linearization technique for generating the linear relaxation will be given in what follows. For this purpose, let us denote then we have and for any box , where and for each , Additionally, let where , .

Thus, from Theorem 1 in [20], it follows that and that and satisfy where

Next, we will give the relaxation linear functions of , , and over . Based on the above discussion, it is obvious that we have, for all , where .

Consequently, we obtain the following linear programming as a linear relaxation of over the partition set : where An important property of is that its optimal value satisfies Thus, from (33), the optimal value of provides a valid lower bound for the optimal value of over .

Based on the above result, for any rectangle , in order to obtain a lower bound of the optimal value to subproblem , we may compute such that where is the optimal value of the problem .

Clearly, defined in (34) satisfies and is consistent. It can provide a valid lower bound and guarantee convergence.

3.3. Reduction Operations

Clearly, the smaller the rectangle is, the tighter the lower bound of will be and, therefore, the closer the feasible solution of will be to the corresponding optimal solution. To this end, the following results give two reduction operations, namely, the reduction cut and the deleting technique, which reduce the size of the partitioned rectangle without losing any feasible solution currently still of interest.

(1) Reduction Cut. At a given stage of the branch-and-bound algorithm for , for a rectangle generated during the partitioning procedure and still of interest, let be the objective function value of the best feasible solution to the problem found so far. Given an , we want to find a feasible solution of such that , or else establish that no such exists. The search for such can then be restricted to the set , where

The reduction cut is based on the monotonic structure of the problem ; it aims at replacing the rectangle with a smaller rectangle without losing any point , that is, such that . The rectangle satisfying this condition is denoted by with . To illustrate how is derived by the reduction cut, we first define the following functions.

Definition 3. Given two boxes and with , for , the functions and are defined by where is a unit vector with 1 at the th position and 0 everywhere else, .

From the functions , , and , we have the following result.

Theorem 4. Let be given and let . If or for some , then . Otherwise, are given by satisfying

Proof. (i) By the increasing property of , , and , if , then for every . If there exists such that , then for every . In both cases, .
(ii) Given any point satisfying we will show that . Let
Firstly, we show that . If , then there exists an index such that We consider the following two cases.
Case 1. If , then, from (42), we have , contradicting ; that is, .
Case 2. If , then the function must be strictly decreasing in the single variable over the interval . Indeed, if the function were not strictly decreasing in the single variable , it would have to be constant over the interval . In this case, we have It follows from the definition of that , contradicting .
Since the function is strictly decreasing, it follows, from (42) and the definition of , that . Hence, . In addition, since is an increasing function of the -dimensional variable and , we have contradicting .
Based on the above discussion, we have ; that is, in either case.
Secondly, we can also show that ; that is, . Suppose that ; then there exists some such that that is, there exists such that By the definition of , there are the following two cases to consider.
Case 1. If , then, from (45), we have , contradicting ; that is, .
Case 2. If , then the function is strictly increasing in the single variable . Indeed, if were not strictly increasing in , it would have to be constant over . In this case, we have or It follows from the definition of that , which contradicts .
Since the function is strictly increasing, it follows from (46) and the definition of that or
Assume that (49) holds; then we can derive, from (46), that It follows from and that contradicting .
If (50) holds, we obtain, from (46), that Since and is increasing, we have This contradicts .
From the above results, we must have ; that is, in both cases, which completes the proof.

Remark 5. and given in Theorem 4 must exist and be unique, since the functions , , and are all continuous and increasing.

Remark 6. In order to obtain , the computation of and is more easily implemented than that of (2.4) and (2.5) in [28]. This is because the latter is computed by solving nonlinear nonconvex programming problems, whereas the former only requires solving strictly monotonic single-variable equations.
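Since the functions involved are continuous and strictly monotone in a single variable, each such equation can be solved reliably by bisection. A minimal sketch, assuming a continuous increasing function g on [a, b] with g(a) <= 0 <= g(b); this illustrates the kind of root-finding Remark 6 refers to, not the paper's exact routine.

```python
def monotone_root(g, a, b, tol=1e-10):
    """Root of a continuous, strictly increasing function g on [a, b],
    assuming g(a) <= 0 <= g(b), found by plain bisection."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid   # the root lies to the right of mid
        else:
            hi = mid   # the root lies to the left of mid
    return 0.5 * (lo + hi)

# usage: solve t**3 + t - 2 = 0 on [0, 2] (root at t = 1)
root = monotone_root(lambda t: t ** 3 + t - 2.0, 0.0, 2.0)
```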

(2) Deleting Technique. For any with , without loss of generality, we assume that the relaxation linear problem can be rewritten as and let be a known upper bound of the optimum of . Define where .

Theorem 7. For any , if , then there exists no optimal solution of the problem over . Otherwise, if and , for some , then there is no optimal solution of the problem over the subrectangle ; conversely, if and , for some , then there exists no optimal solution of over , where

Proof. The proof is similar to that of Theorem 2 in [27] and is omitted here.

Theorem 8. For any , if , for some , then there exists no feasible solution of problem over . Otherwise, consider the following two cases: if there exists some index satisfying and , for some , then there is no feasible solution of the problem over ; conversely, if and , for some and , then there exists no feasible solution of the problem over , where

Proof. The proof is similar to that of Theorem 3 in [27] and is omitted here.

By Theorems 7 and 8, we can give a new deleting technique to reject regions in which no globally optimal solution of exists. Let with be any subrectangle of . The deleting technique is summarized as follows.

(S1) Optimality Rule. Compute in (56). If , let ; otherwise, compute in (57). If and , for some , then let and with . If and , for some , then let and with .

(S2) Feasibility Rule. For any , compute in (56). If , for some , then let ; otherwise, compute in (58). If and , for some and , then let and with . If and , for some and , then let and with .

This deleting technique makes it possible to cut away all, or a large part, of the subrectangle currently being investigated by the algorithm.
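To make such box-shrinking rules concrete, the sketch below applies the classical range-reduction test to a single hypothetical linear constraint c^T y <= b over a box [l, u]. The actual rules (S1)–(S2) use the quantities defined in (56)–(58); this is only an analogous, self-contained illustration.

```python
def reduce_box_linear(c, b, l, u):
    """Shrink the box [l, u] using the linear constraint sum(c[i]*y[i]) <= b.

    If even the box minimum of the left-hand side exceeds b, the whole
    box contains no feasible point and None is returned; otherwise each
    coordinate interval is tightened.  Illustrative range reduction in
    the spirit of rules (S1)-(S2), not the paper's exact formulas.
    """
    n = len(c)
    # minimum of c^T y over the box: l[i] where c[i] > 0, u[i] where c[i] < 0
    m = sum(c[i] * (l[i] if c[i] > 0 else u[i]) for i in range(n))
    if m > b:
        return None            # constraint violated everywhere on the box
    l, u = l[:], u[:]
    slack = b - m
    for i in range(n):
        if c[i] > 0:
            u[i] = min(u[i], l[i] + slack / c[i])  # tighten from above
        elif c[i] < 0:
            l[i] = max(l[i], u[i] + slack / c[i])  # tighten from below
    return l, u

# usage: y1 + 2*y2 <= 3 over [0, 4] x [0, 4] shrinks to [0, 3] x [0, 1.5]
print(reduce_box_linear([1.0, 2.0], 3.0, [0.0, 0.0], [4.0, 4.0]))
```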

4. Algorithm and Its Convergence

Now, a branch-reduce-bound (BRB) algorithm is developed to solve the problem based on the foregoing discussion. This method needs to solve a sequence of (RLP) problems over partitioned subsets of .

The BRB algorithm is based on partitioning the rectangle into subrectangles, each associated with a node of the branch-and-bound tree. Hence, at any stage of the algorithm, suppose that we have a collection of active nodes denoted by , where each node is associated with a rectangle and . For each such node , we will compute a lower bound of the optimal objective function value of via the optimal value of and , so the lower bound of the optimal value of at stage is given by . We then select an active node and subdivide its associated rectangle into two subrectangles according to the branching rule described in Section 3.1. For each new node, we reduce it and then compute the lower bound as before. At the same time, if necessary, we update the upper bound . Upon fathoming any nonimproving node, we obtain a collection of active nodes for the next stage, and this process is repeated until convergence is achieved.

4.1. Algorithm Statement

Step 1 (initialization). Choose the convergence tolerance . Let and . If some feasible solutions are available, add them to and let ; otherwise, let and . Set .

Step 2 (reduction). (i) Apply the reduction cut described in Section 3.3 to each box . Let with .
(ii) If , then, for each box , use the deleting technique in Section 3.3 to cut away part of , and still denote the remaining box by .

Step 3 (bounding). If , then, for each , do the following. (i) Solve the problem to obtain the optimal solution and the optimal value . If is feasible to problem , then set . Let . (ii) If for every , then set . (iii) If , define the new upper bound , and denote the best known feasible point by . Set .

Step 4 (convergence checking). Set .
If , then stop: if , the problem is infeasible; otherwise, is the optimal value and is the optimal solution. If the stopping criterion is not met, select an active node for further consideration and let .

Step 5 (branching). Divide into two new subrectangles using the branching rule and let be the collection of these two subrectangles. Set and return to Step 2.
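A skeletal sketch of the overall loop in Steps 1–5 is given below. The routines solve_rlp, reduce_box, and bisect are placeholders for the bounding, reduction, and branching operations of Section 3, so this is an outline of the control flow rather than the authors' implementation.

```python
import math

def brb(box0, solve_rlp, reduce_box, bisect, eps=1e-6):
    """Skeleton of the branch-reduce-bound loop of Section 4.1.

    solve_rlp(box) -> (lb, fval): lower bound over the box via (RLP) and
        the objective value of a feasible point found there (math.inf if
        none was found).                                       # Step 3
    reduce_box(box, ub) -> smaller box, or None if deleted.    # Step 2
    bisect(box) -> (box1, box2).                               # Step 5
    """
    lb0, fval0 = solve_rlp(box0)
    ub = fval0                        # incumbent upper bound
    active = [(lb0, box0)]            # list of (lower bound, box) nodes
    while active:
        # Step 4: select the active node with the smallest lower bound
        k = min(range(len(active)), key=lambda i: active[i][0])
        lb, box = active.pop(k)
        if ub - lb <= eps:            # convergence check
            break
        for child in bisect(box):     # Step 5: branching
            child = reduce_box(child, ub)      # Step 2: reduction
            if child is None:
                continue              # the whole child box was cut away
            clb, fval = solve_rlp(child)       # Step 3: bounding
            ub = min(ub, fval)        # update the incumbent if improved
            if clb < ub - eps:        # fathom nonimproving nodes
                active.append((clb, child))
    return ub                         # eps-optimal value (math.inf: infeasible)
```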

4.2. Convergence Analysis

In this subsection, we give the convergence of the proposed algorithm.

Theorem 9. If the presented algorithm terminates after finitely many steps, then, when it stops, must be a global optimal solution of the problem . Otherwise, for any infinite branch of the initial rectangle domain, an infinite sequence of partitioned rectangles will be produced, any accumulation point of which must be a global optimal solution of the initial problem .

Proof. Assume that this algorithm terminates finitely at some stage , . Thus, when the algorithm terminates, it follows that . By Steps 2 and 4 of the algorithm, there exists a feasible solution of the problem satisfying , which implies that . Let be the optimal value of the problem ; then, by the structure of this algorithm, we have . Since is a feasible solution of the problem , . Combining the above inequalities, we have ; that is, . Thus, is an -global optimal solution of the problem .
If the algorithm does not terminate finitely, it generates an infinite sequence such that a subsequence of satisfies for . Thus, it follows from [28, 29] that this rectangle subdivision is exhaustive. Hence, for every iteration , by the design of the algorithm, there is at least an infinite subsequence of such that Since is a nondecreasing sequence bounded above by , where is the feasible set of problem , the limit exists and . Since is an infinite sequence on a compact set , there exists a convergent subsequence of satisfying and , where is a subsequence of . By Theorem 1 and Lemma 1 of [25], the linear subfunctions used in the problem are strongly consistent on . Thus, . All that remains is to show that . Since is a closed set, it follows that . Suppose that . Then there exists some , , such that . Since is continuous, by Theorem 1 and Lemma 1 of [25], we have as ; that is, such that as , and so, for sufficiently large , , which implies that the problem is infeasible. This contradicts the assumption that is the optimal solution of . Therefore, ; that is, .

5. Numerical Experiments

Two computational issues may arise in implementing the suggested global algorithm.

The first computational issue concerns the fact that we need to obtain the positive scalars and such that for all before running the algorithm (see Section 2). Actually, and are available through solving the following two problems: The problems and are special cases of the original problem ; therefore, by using the proposed algorithm, the values of and can be obtained directly without requiring any other special procedure (see [24, 25]). Furthermore, the interval obtained is tighter than the one produced by the Bernstein algorithm in [24, 25] (see Table 1), so that the convergence of the algorithm may be improved.

The second computational issue concerns the lower bounding process. As described in Section 3, each lower bound in the algorithm is computed by solving a relaxation linear programming problem of the form (RLP). Here, we adopt the simplex algorithm to solve the relaxation linear programming problem, so the implementation of the proposed global algorithm depends upon the simplex algorithm.
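Each (RLP) subproblem is an ordinary linear program, so any LP solver can be substituted for the simplex routine used in the paper. A minimal sketch with hypothetical data, using SciPy's linprog rather than the authors' Fortran code:

```python
from scipy.optimize import linprog

# Hypothetical (RLP)-style data: minimize c^T t subject to A t <= b, bounds on t.
c = [1.0, 2.0]
A = [[-1.0, 1.0], [1.0, 3.0]]
b = [1.0, 6.0]
bounds = [(0.0, 4.0), (0.0, 4.0)]

res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
lower_bound = res.fun  # the optimal value of the relaxation is a valid lower bound
print(lower_bound)
```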

We now report our numerical experiments on five test examples and some randomly generated problems to demonstrate the performance of the proposed optimization algorithm. The algorithm is coded in Compaq Visual Fortran, and all test problems are run on a microcomputer with an Athlon(tm) 2.31 GHz CPU and 960 MB of RAM. The numerical results for all test problems are summarized in Tables 2 and 3 and show that the proposed algorithm can globally solve the problem effectively.

In the following tables, the following notation is used for the column headers: Iter: the number of algorithm iterations; Time: the execution time in seconds; : the convergence tolerance. For the row headers, BRB denotes the corresponding numerical results of the proposed BRB algorithm.

Example 1 (see [25]). Consider

Example 2 (see [25]). Consider

Example 3 (see [25]). Consider

Example 4 (see [24]). Consider

Example 5 (see [24]). Consider

From Table 1, the upper and lower bounds of obtained by the proposed method are better than those of other methods [25]; that is, the values of and are all larger, and the values of and are all smaller, than those obtained by other methods.

From Table 2, the numerical results show that, compared with other methods [24, 25], the proposed algorithm clearly improves the computational efficiency in terms of both the number of iterations and the overall execution time.

Additionally, to test our algorithm further, we choose the following randomly generated problem: where is an integer (e.g., is taken to be in Table 3), for each , , with , each element of is randomly generated from , and are randomly generated from . The elements of and with and are generated using random numbers in the intervals and , respectively.

For the above test problem, the convergence tolerance parameter is set as and . Numerical results are summarized in Table 3, where the average number of iterations, the average number of list nodes, and the average CPU time (seconds) are obtained by running the BRB algorithm 20 times on this problem.

It is seen from Table 3 that (the number of variables) is the main factor affecting the performance of the algorithm. This is mainly because much time is needed to compute the bounds of the introduced variables, and this cost grows as the number of variables increases. Moreover, because the constraint function values must be evaluated in the reduction operations and the linear relaxation, the CPU time also increases as (the number of inequality constraints) grows, but not as sharply as with .

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the National Natural Science Foundation of China (11171094 and 11171368).