Research Article  Open Access
Xue-Gang Zhou, Ji-Hui Yang, "Global Optimization for the Sum of Concave-Convex Ratios Problem", Journal of Applied Mathematics, vol. 2014, Article ID 879739, 10 pages, 2014. https://doi.org/10.1155/2014/879739
Global Optimization for the Sum of Concave-Convex Ratios Problem
Abstract
This paper presents a branch and bound algorithm for globally solving the sum of concave-convex ratios problem (P) over a compact convex set. First, problem (P) is converted to an equivalent problem (P1). Then, the initial nonconvex programming problem is reduced to a sequence of convex programming problems by means of a linearization technique. The proposed algorithm converges to a global optimal solution through the successive solution of a series of convex programming problems. Some examples are given to illustrate the feasibility of the proposed algorithm.
1. Introduction
We consider the following concave-convex ratios programming problem:
where, for each ratio, the numerator is a concave, differentiable function, the denominator is a convex, differentiable, and positive function, the feasible region is a nonempty, compact convex set, and each numerator is nonnegative on the feasible region.
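In standard sum-of-ratios notation, which we introduce here only for readability (the symbols \(f_i\), \(g_i\), \(X\), \(n\), and \(p\) are our own labels, not fixed by the text above), the problem class just described can be sketched as:

```latex
(\mathrm{P}):\quad \max_{x \in X} \; \sum_{i=1}^{p} \frac{f_i(x)}{g_i(x)},
```

where each \(f_i\) is concave and differentiable with \(f_i(x) \ge 0\) on \(X\), each \(g_i\) is convex and differentiable with \(g_i(x) > 0\) on \(X\), and \(X \subseteq \mathbb{R}^n\) is a nonempty, compact convex set.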
During the past years, various algorithms have been proposed for solving special cases of the fractional programming problem. For instance, algorithmic and computational results for single-ratio fractional programming can be found in [1, 2] and in the literature cited therein. At present, there exist a number of algorithms for globally solving the sum of ratios problem in which the numerators and denominators are affine functions or the feasible region is a polyhedron [3–5]. To our knowledge, four algorithms have been proposed for solving the nonlinear sum of ratios problem [6–9]. Freund and Jarre [10] present a suitable interior-point approach for the solution of much more general problems with convex-concave ratios and convex constraints. Shen et al. [11] present a simplicial branch and duality bound algorithm for globally solving the sum of convex-convex ratios problem with a nonconvex feasible region.
In this paper, we develop a branch and bound algorithm for globally solving problem . First, although the branch and bound search involves rectangles defined in a space of dimension , branching takes place in a space of only dimension , where is the number of ratios in the objective function of problem . Second, all subproblems that must be solved to implement the algorithm are convex programming problems, each of which is guaranteed to have an optimal solution. Finally, some examples are given to show that the proposed method can solve all of the test problems, finding globally optimal solutions within a prespecified tolerance. The algorithm of this paper was motivated by the seminal work of [12] on the generalized concave multiplicative programming problem.
The organization and content of this paper can be summarized as follows. In Section 2, we demonstrate how to convert problem into an equivalent problem . By using the convex envelope of the bilinear function and the special characteristics of the quadratic function, we illustrate how to generate the convex relaxation program for problem in Section 3. In Section 4, the branch and bound algorithm for globally solving is presented, together with convergence properties of the algorithm and computational considerations for implementing it. Some numerical examples are given to demonstrate the effectiveness of the proposed algorithm in Section 5. Some concluding remarks are given in Section 6.
2. Equivalent Program
To globally solve problem , the branch and bound algorithm instead globally solves an equivalent problem. The main task of this section is to show how to convert problem into an equivalent nonconvex programming problem .
Let . For each , let . Then, we have the following result.
Proposition 1. Let be an open set containing such that for each , , , for all . Then, for each , the function is semistrictly quasiconcave on .
Proof. For any , it is easy to show that the function is concave and differentiable on . Since is positive, convex, and differentiable on , it follows from Avriel et al. [13] that is semistrictly quasiconcave on .
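As an illustrative sanity check (not part of the paper), the semistrict quasiconcavity asserted in Proposition 1 can be probed numerically on a sample concave/convex ratio. The functions f(x) = 2 − x² and g(x) = 1 + x² below are our own example choices satisfying the stated assumptions on [-1, 1]:

```python
import numpy as np

# Example ratio: concave, nonnegative numerator over convex, positive denominator.
f = lambda x: 2.0 - x**2          # concave and nonnegative on [-1, 1]
g = lambda x: 1.0 + x**2          # convex and positive
h = lambda x: f(x) / g(x)

rng = np.random.default_rng(0)
ok = True
for _ in range(2000):
    a, b = rng.uniform(-1.0, 1.0, size=2)
    lam = rng.uniform(0.01, 0.99)                 # strictly interior combination
    m = lam * a + (1.0 - lam) * b
    if abs(h(a) - h(b)) > 1e-9:
        # Semistrict quasiconcavity: the interior value must exceed the
        # smaller endpoint value (checked up to rounding error).
        ok = ok and (h(m) > min(h(a), h(b)) - 1e-12)
print(ok)
```

The check samples random segments in the domain and verifies the defining inequality of semistrict quasiconcavity at interior points.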
Proposition 2. Let be defined as in Proposition 1. For each , we consider the problem
Then, any local maximum is also a global maximum of problem .
Proof. Since is a convex set, the result follows directly from Proposition 1 and Theorem 3.37 of [13].
Therefore, can be found by any number of convex programming algorithms. Let . Then, is a full-dimensional rectangle in . Let be defined for each by
For any , define the problem by
Definition 3 (see [14]). Let and be convex subsets of and , respectively. A real-valued function defined on is biconcave if, for each fixed , is a concave function on and, for each fixed , is a concave function on . The following result shows that, for every , the value of can be determined by solving a convex program.
Lemma 4. The objective function of problem is biconcave on .
Proof. For each , let and define by . Therefore, for every and , we have . Notice that , and are convex sets; then it suffices to show that, for every , is biconcave on . Given . Thus, for any , . For all , are concave, since the function is concave and , . Because the function is a positive convex function on and , is also concave on . Therefore, it follows that is a concave function on .
Now, let be a fixed vector. For all , . Since , it follows easily that is a concave function on . The proof is complete.
We now define the problem by
Theorem 5. The problem is equivalent to the problem in the following sense: if is an optimal solution to the problem and if is a corresponding optimal solution of problem with , then is an optimal solution for problem . Moreover, the following relations hold:
Conversely, if is an optimal solution of problem , the value deduced from relation (6) corresponds to an optimal solution of problem and relation (7) holds.
Proof. Let be a global optimal solution for problem , and let solve problem with ; then . Then,
It follows from the definition of and (10) that is a global optimal solution to the problem
Therefore, is a global optimal solution to the problem
For each , define for each by
Then, for all , since , is a strictly concave function, and the maximum of over is attained uniquely at . By definition of , . The previous two statements imply that is the unique optimal solution to (12). Therefore, and the objective function value in (12) of is
So, is also the objective function value of in problem . Assume that there exist some such that . Let . Then, , and the objective function value of in problem is
It follows from and (16) that is not a global optimal solution to problem , which is a contradiction. This implies that, for all , ; that is, is a global optimal solution for problem . From (10) and (16), . Since , this completes the proof of the first statement of the theorem.
Now suppose that is a global optimal solution for problem . Then, and , for all . We compute by (7). From the definition of ,
Suppose that for some and is a corresponding optimal solution of problem with . By the first part of the theorem, this implies that . Therefore, since , , and is a global optimal solution for problem .
3. Relaxation Problem for Problem
Let denote or a subrectangle of that is generated by the branch and bound algorithm, where and for all . We consider the following problem:
For each , let , and let satisfy
Since, for every , is a concave function on , for each , can be found by solving a convex programming problem. For each , can be chosen to be a sufficiently large positive number.
Now, consider the function which is given for any by
Theorem 6. For each . Moreover, if and is an optimal solution to problem , then .
Proof. Let . On the basis of the definition , for some ,
For every , let
Assume that and . So, is a feasible solution to problem (P()) with objective function value
where the equation follows from (18) and (19). Then, we have . Thus, in order to prove the first result of the theorem, it suffices to show that does not hold.
Assume that . Based on (18), there exists some feasible solution for problem such that
According to the definition of and in the problem , for all and , we have
From (18), (21), and (22), we obtain
Since , this contradicts the definition of . Therefore, the assumption that is false. This implies that , for each .
Now, we show the second part of the theorem. Let be an optimal solution to problem . Then, for each , , . So, we have
Let and , where, for each ,
Then, is a feasible solution for problem , so that, by definition of ,
From (24) and (26), it follows that
since .
It follows from Theorem 6 that problem has the same optimal value as the following problem: where .
In order to construct a relaxation problem for problem (PE1), we must use the concept of a concave envelope, which may be defined as follows.
Definition 7 (see [15]). Let be a compact, convex set, and let be upper semicontinuous on . Then, is called the concave envelope of on when (i) is a concave function on ; (ii) for all ; (iii) there is no function satisfying (i) and (ii) such that for some point .
The convex envelope of a function on is defined in a similar manner.
Let, for each , . Then, for each , and the concave envelope of the quadratic function is given by
where . Let represent the linear lower bounding function of over the interval . Then, by the convexity of the function , the function is given as follows:
Lemma 8. Consider the functions , , and for any , where . Then, the following two statements are valid. (i) is an affine concave envelope of over , and is an affine function corresponding to a supporting hyperplane of the graph of over , which is parallel to . Moreover, we have (ii) When , the differences and satisfy
Proof. Consider the following. (i) Obviously, (ii) Since is a concave function of for any , attains its maximum at the point . Thus, it is not difficult to obtain
On the other hand, since is a convex function of for any , attains its maximum at the point or . Thus,
Obviously, when ,
This completes the proof.
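The two affine functions in Lemma 8 can be checked numerically for a concrete interval: for the convex quadratic t², the chord over [l, u] is its concave envelope (an overestimator), and the parallel tangent line is an underestimator, both with maximum gap (u − l)²/4. The interval bounds below are our own example values:

```python
import numpy as np

l, u = -1.0, 3.0                     # example interval bounds (our choice)
t = np.linspace(l, u, 401)
q = t**2                             # the convex quadratic being bounded

chord = (l + u) * t - l * u          # concave envelope: chord through (l, l^2) and (u, u^2)
tangent = (l + u) * t - ((l + u) / 2.0)**2   # parallel supporting line, tangent at the midpoint

assert np.all(chord >= q - 1e-9)     # the chord overestimates t^2 on [l, u]
assert np.all(tangent <= q + 1e-9)   # the tangent underestimates t^2 on [l, u]
# both maximum gaps equal (u - l)^2 / 4, which is 4.0 for this interval
print(float(np.max(chord - q)), float(np.max(q - tangent)))
```

The chord gap is attained at the midpoint and the tangent gap at the endpoints, matching statement (ii) of the lemma.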
Therefore, for all , we can obtain that is,
For each , , and, from Benson [16], the concave envelope of the bilinear function is given for each by and the convex envelope of the bilinear function is given for each by
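The bilinear envelopes cited from Benson [16] are the standard McCormick over- and underestimators of a product over a box. A numerical verification on a sample box (the bounds are our own example values) is:

```python
import numpy as np

yl, yu, zl, zu = 0.5, 2.0, 1.0, 3.0      # example box bounds (our own choice)
Y, Z = np.meshgrid(np.linspace(yl, yu, 81), np.linspace(zl, zu, 81))
prod = Y * Z                              # the bilinear term y*z to be bounded

# Standard McCormick estimators of y*z over the box [yl, yu] x [zl, zu]:
cvx = np.maximum(zl * Y + yl * Z - yl * zl, zu * Y + yu * Z - yu * zu)  # convex envelope
ccv = np.minimum(zl * Y + yu * Z - yu * zl, zu * Y + yl * Z - yl * zu)  # concave envelope

assert np.all(cvx <= prod + 1e-9)         # underestimates y*z everywhere on the box
assert np.all(prod <= ccv + 1e-9)         # overestimates y*z everywhere on the box
# the worst-case gap is (yu - yl)(zu - zl)/4, attained at the box center
print(round(float(np.max(ccv - prod)), 4))
```

Each envelope is the pointwise best of two supporting planes obtained from the nonnegativity of products such as (y − yl)(z − zl).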
We now define the following problem:
Notice that the optimal value of problem satisfies UB. It is also easy to see that the feasible region of problem is a nonempty compact set. Since the objective function of problem is an affine function over this set, problem always has an optimal solution.
4. Algorithm and Convergence
To globally solve problem , the algorithm to be presented uses a branch and bound approach. There are three fundamental processes in the algorithm: a branching process, a lower bounding process, and an upper bounding process.
4.1. Branching Rule
The algorithm performs a branching process that iteratively subdivides the dimensional rectangle of problem into smaller rectangles of the same dimension. The branch and bound approach is based on partitioning the set into subrectangles, each associated with a node of the branch and bound tree and with a relaxation linear subproblem on the corresponding subrectangle. These subrectangles are obtained by the branching process, which helps the branch and bound procedure identify a location in the feasible region of problem that contains a global optimal solution to the problem.
During each iteration of the algorithm, the branching process creates a more refined partition of a portion of that cannot yet be excluded from consideration in the search for a global optimal solution for problem (). The initial partition consists simply of , since at the beginning of the branch and bound procedure, no portion of can as yet be excluded from consideration.
During iteration of the algorithm, , the branching process is used to help create a new partition . First, a screening procedure is used to remove any rectangle from that can, at this point of the search, be excluded from further consideration, and is temporarily set equal to the set of rectangles that remain. Later in iteration , a rectangle in is identified for further examination. The branching process is then invoked to subdivide into two subrectangles , . This subdivision is accomplished by a process called rectangular bisection.
Consider any node subproblem identified by the subrectangle , where is defined as before. The branching rule is as follows [17].
Step 1. Let .
Step 2. Let satisfy .
Step 3. Let
The new partition of the portion of remaining under consideration is then given by .
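A minimal sketch of the rectangular bisection in Steps 1–3, assuming (as in the standard exhaustive bisection rule) that the branching index is a longest edge and the split point is its midpoint:

```python
from typing import List, Tuple

Rect = List[Tuple[float, float]]

def bisect(rect: Rect) -> Tuple[Rect, Rect]:
    """Split a rectangle along a longest edge at its midpoint."""
    # Step 1-2: choose the branching index j as a longest edge.
    j = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    l, u = rect[j]
    mid = 0.5 * (l + u)
    # Step 3: the two subrectangles differ from rect only in coordinate j.
    return (rect[:j] + [(l, mid)] + rect[j + 1:],
            rect[:j] + [(mid, u)] + rect[j + 1:])

left, right = bisect([(0.0, 4.0), (1.0, 2.0)])
print(left, right)
```

Because only one coordinate interval is split, the two subrectangles cover the parent exactly and overlap only on the splitting hyperplane, which is what the partition property requires.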
4.2. Lower Bound and Upper Bound
The second fundamental process of the algorithm is the upper bounding process. For each rectangle created by the branching process, this process gives an upper bound UB() for the optimal value of the problem , that is,
For each rectangle created by the branching process, from (40), UB() is found by solving a single convex program .
During each iteration , the upper bounding process computes an upper bound for the optimal value of problem . For each , this upper bound is given by
The lower bounding process is the third fundamental process of the branch and bound algorithm. In each iteration of the algorithm, this process finds a lower bound for . For each , this lower bound is given by where is the incumbent feasible solution for problem ; that is, among all optimal solutions for problems of the form found through iteration , achieves the largest value of .
4.3. Branch and Bound Algorithm
Based on the results and algorithmic processes discussed in this section, the basic steps of the proposed global optimization are summarized in the following.
Step 0 (initialization). (i) Determine an optimal solution and the optimal value UB to problem PR1. Set , , and . (ii) Set and , and go to iteration .
Iteration .
Step k.1. If , then terminate. is a global optimal solution for problem , and . Then, we can solve problem on the basis of problem with . If , continue.
Step k.2. Subdivide into two rectangles and via the rectangular bisection.
Step k.3. For each , find an optimal solution and the optimal value UB to problem PR1.
Step k.4. Set , and choose so that .
Step k.5. Set .
Step k.6. Delete from all rectangles such that UB.
Step k.7. If , set , set , and go to iteration . Otherwise, set . Choose a rectangle such that UB, set , and go to iteration .
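The overall flow of Steps k.1–k.7 can be sketched generically. In this sketch the one-dimensional Lipschitz-based upper bound stands in for the convex relaxation (PR1), and the toy objective and constant are our own choices; it illustrates the select/branch/bound/prune loop, not the paper's specific relaxation:

```python
import heapq
import math

def branch_and_bound(f, ub, lo, hi, eps=1e-3):
    """Maximize f on [lo, hi]; ub(l, u) must overestimate the max of f on [l, u]."""
    best_x, best_val = lo, f(lo)            # incumbent solution and lower bound
    heap = [(-ub(lo, hi), lo, hi)]          # max-heap on upper bounds (Step k.7: pick largest UB)
    while heap:
        neg, l, u = heapq.heappop(heap)
        if -neg <= best_val + eps:          # Step k.1/k.6: fathom, cannot improve
            continue
        m = 0.5 * (l + u)                   # Step k.2: bisect the region
        if f(m) > best_val:                 # Step k.5: update the incumbent
            best_x, best_val = m, f(m)
        for a, b in ((l, m), (m, u)):       # Step k.3-k.4: bound the children
            if ub(a, b) > best_val + eps:
                heapq.heappush(heap, (-ub(a, b), a, b))
    return best_x, best_val

# Toy instance: maximize t*sin(t) on [0, 10] with a Lipschitz-based bound.
f = lambda t: t * math.sin(t)
L = 11.0                                    # |f'(t)| <= 1 + t <= 11 on [0, 10]
ub = lambda l, u: f(0.5 * (l + u)) + L * 0.5 * (u - l)
x_best, v_best = branch_and_bound(f, ub, 0.0, 10.0)
print(round(x_best, 3), round(v_best, 3))
```

At termination the incumbent value is within eps of the global maximum, mirroring the eps-tolerance termination recommended in Section 4.4.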
4.4. Convergence
In this subsection, we give the global convergence of the above algorithm. By the construction of the algorithm, when the algorithm is finite, it either finds a global optimal solution for problem or detects that problem is infeasible. It is also possible for the algorithm to be infinite. We will discuss this case in the following.
Denote . Suppose that denotes or a subrectangle of that is generated by the branch and bound algorithm. Then, may be written as where, for any , and, for each , are positive scalars such that .
If the algorithm is infinite, by the rectangular bisection, since is finite, there exists an infinite sequence of rectangles in generated by the algorithm such that, for any , and is formed from . By Step 1 of the rectangular bisection process, for some fixed , where for all . Next, let be a sequence of rectangle of this type, and, for all and any , let
Lemma 9. For some subsequence of , the limit rectangle is a rectangle parallel to the coordinate plane.
Proof. By Lemma 5.4 in [17] and the rectangular bisection, there exists a subsequence of such that where . So, either or . Then, which is a rectangle parallel to the coordinate plane.
Theorem 10. Suppose that the proposed algorithm is infinite, and let be a sequence of rectangles in generated by the algorithm such that, for each . Let . Then, for some subsequence of ,(a),(b)any accumulation point of the sequence of is a global optimal solution of problem .
Proof. Consider the following.(a)Notice that, according to (39) and (40), it follows that, for each ,
and problem PR1() may be rewritten as
where .
By the algorithm, since is infinite, we may choose a sequence of such that, for each , UB. At the same time, without loss of generality, we may assume that have the properties of Lemma 9. Since for each , UB, by the upper bounding process
By applying Lemma 9 repeatedly, we may assume that
where for each , is a rectangle in parallel to the coordinate plane. Let be the feasible domain of problem . For each ,
where the first equation follows, since , from the definition of UB in the upper bounding process, the first inequality follows from Step of the rectangular bisection algorithm and the validity of the upper bounding process, the second inequality follows because , and the third inequality holds by the choice of the incumbent solution in Step of the algorithm. For each ; therefore, there is a convergent subsequence of , and, by (56), the limit point of this sequence lies in . Without loss of generality, assume that
By the continuity of on ,
For all , by (57),
Combining (55), (57), and (58), we have
Since for each , this confirms the assertion. (b) By the algorithm and (a), we obtain
Let be an accumulation point of ; then for some
By (62), since is a subsequence of ,
From (63) and the continuity of the objective function of problem ,
From (63) and (64), we get
Since the feasible region of problem is a closed set, . It follows from (65) that is a global optimal solution for problem ; the proof is complete.
By the algorithm, it may happen that, even after many iterations, may remain nonempty. However, by the convergence result, it follows that, for any , will hold for sufficiently large. In practice, it is recommended that the algorithm be terminated if, for some prechosen, relatively small value of , (66) holds. When termination occurs in this way, it is easy to show that is a global optimal solution and is a global optimal value for the problem in the sense that and .
5. Numerical Experiments
To verify the performance of the proposed global optimization algorithm, some test problems were solved. The test problems were coded in C++ and the experiments were conducted on a Pentium IV (3.06 GHz) microcomputer.
Example 11.
Consider
Prior to initiating the algorithm, we first determine a rectangle . Then, the problem is
where
Solving the linear program (PR1()) yields the initial upper bound , and the lower bound with , . Set . The algorithm finds a global optimal value of 4.060819 after 23 iterations at the global optimal solution .
Example 12 (see [11]). Consider the following problem; the results are given in Table 1.

Example 13. In this example, we solve 6 different random instances: where and are negative semidefinite, while and are definite, is a matrix, and all elements of , , , , , are randomly generated in the ranges . Table 2 summarizes our computational results. In Table 2, the following indices characterize the performance of the algorithm: Ave. CPU (s) is the average CPU time in seconds; Ave. Iter is the average number of iterations.

6. Conclusion
In this paper, we present a branch and bound algorithm for solving a class of fractional programming problems . To globally solve problem , we first convert problem into an equivalent problem ; then, through a linearization method, we obtain a convex relaxation programming problem () of problem . In the algorithm, the branch and bound tree creates rectangular regions that belong to , where is the number of ratios in the objective function of problem . However, the branching process takes place only in , rather than . In addition, all subproblems that must be solved to implement the algorithm are convex programming problems, each of which is guaranteed to have an optimal solution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This project is supported by the Ph.D. Start-Up Fund of the Natural Science Foundation of Guangdong Province, China (no. S2013040012506), the Project Science Foundation of Guangdong University of Finance (no. 2012RCYJ005), and the Postdoctoral Fund of Shenyang Agricultural University (no. 770212025).
References
[1] T. Ibaraki, "Parametric approaches to fractional programs," Mathematical Programming, vol. 26, no. 3, pp. 345–362, 1983.
[2] S. S. Chadha, "Fractional programming with absolute-value functions," European Journal of Operational Research, vol. 141, no. 1, pp. 233–238, 2002.
[3] T. Kuno, "A branch-and-bound algorithm for maximizing the sum of several linear ratios," Journal of Global Optimization, vol. 22, pp. 155–174, 2002.
[4] P.-P. Shen and C.-F. Wang, "Global optimization for sum of linear ratios problem with coefficients," Applied Mathematics and Computation, vol. 176, no. 1, pp. 219–229, 2006.
[5] H. P. Benson, "On the global optimization of sums of linear fractional functions over a convex set," Journal of Optimization Theory and Applications, vol. 121, no. 1, pp. 19–39, 2004.
[6] H. P. Benson, "Using concave envelopes to globally solve the nonlinear sum of ratios problem," Journal of Global Optimization, vol. 22, pp. 343–364, 2002.
[7] H. P. Benson, "Global optimization algorithm for the nonlinear sum of ratios problem," Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 1–29, 2002.
[8] C.-T. Chang, "On the posynomial fractional programming problems," European Journal of Operational Research, vol. 143, no. 1, pp. 42–52, 2002.
[9] Y.-J. Wang and K.-C. Zhang, "Global optimization of nonlinear sum of ratios problem," Applied Mathematics and Computation, vol. 158, no. 2, pp. 319–330, 2004.
[10] R. W. Freund and F. Jarre, "Solving the sum-of-ratios problem by an interior-point method," Journal of Global Optimization, vol. 19, no. 1, pp. 83–102, 2001.
[11] P.-P. Shen, Y.-P. Duan, and Y.-G. Pei, "A simplicial branch and duality bound algorithm for the sum of convex-convex ratios problem," Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 145–158, 2009.
[12] H. P. Benson, "Global maximization of a generalized concave multiplicative function," Journal of Optimization Theory and Applications, vol. 137, no. 1, pp. 105–120, 2008.
[13] M. Avriel, W. E. Diewert, S. Schaible, and I. Zang, Generalized Concavity, Plenum, New York, NY, USA, 1988.
[14] R. Horst and N. V. Thoai, "Decomposition approach for the global minimization of biconcave functions over polytopes," Journal of Optimization Theory and Applications, vol. 88, no. 3, pp. 561–583, 1996.
[15] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, Germany, 1993.
[16] H. P. Benson, "On the construction of convex and concave envelope formulas for bilinear and fractional functions on quadrilaterals," Computational Optimization and Applications, vol. 27, no. 1, pp. 5–22, 2004.
[17] H. Tuy, Convex Analysis and Global Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
Copyright
Copyright © 2014 Xue-Gang Zhou and Ji-Hui Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.