Journal of Applied Mathematics
Volume 2013 (2013), Article ID 276245, 7 pages
http://dx.doi.org/10.1155/2013/276245
Research Article

A Global Optimization Algorithm for Sum of Linear Ratios Problem
Yuelin Gao and Siqiao Jin

Institute of Information & System Science, Beifang University of Nationalities, Yinchuan 750021, China

Received 31 January 2013; Accepted 8 May 2013

Academic Editor: Farhad Hosseinzadeh Lotfi

Copyright © 2013 Yuelin Gao and Siqiao Jin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We equivalently transform the sum of linear ratios programming problem into a bilinear programming problem. Then, using the linear character of the convex and concave envelopes of the product of two variables, a linear relaxation of the bilinear programming problem is constructed, which provides a lower bound on the optimal value of the original problem. On this basis, a branch and bound algorithm for the sum of linear ratios programming problem is proposed, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.

1. Introduction

We consider the sum of linear ratios programming problem in the following form: where the feasible domain is -dimensional, nonempty, and bounded, . Assume that and on some rectangle which contains , where , , and .

Fractional programming is an important branch of nonlinear optimization and has attracted researchers' attention for several decades. The sum of linear ratios problem is a special class of fractional programming problem with wide applications, such as bond portfolio optimization, transportation planning, and economic analysis [1–3]. From a research point of view, sum of ratios problems challenge theoretical analysis and computation because they may possess multiple local optima that are not globally optimal, which makes finding a global solution difficult.

At present there exist a number of algorithms for globally solving sum of linear ratios problems. When , Konno et al. [4] constructed a parametric simplex algorithm which can solve large-scale problems; when , Konno and Abe [5] developed a parametric simplex algorithm and constructed an effective heuristic algorithm; when , the problem studied in [6] is a sum of linear ratios with coefficients, and by an equivalent transformation and a linearization technique the original nonconvex programming problem is reduced to a series of linear programming problems. To minimize the problem, Yanjun et al. [7] apply the linearization technique twice, using properties of the exponential and logarithmic functions, to obtain a linear relaxation programming of the original problem. Benson [8] put forward a new branch and bound algorithm that solves a concave minimization problem equivalent to the original problem. Jiao and Feng [9] present a new pruning technique. In the literature [10], the numerators and denominators of the ratios are not necessarily positive. In this paper, we present a new branch and bound algorithm for solving the sum of linear ratios problem and prove its convergence. Finally, numerical experiments are reported.

This paper is organized as follows. In Section 2, we show how to convert the problem (GFP) into an equivalent problem (EP) by a transformation technique. In Section 3, the linear relaxation programming problem of (EP) is constructed. The branching process of the rectangle is given in Section 4. In Section 5, the branch and bound algorithm for globally solving (EP) is presented and its convergence is proved. In Section 6, numerical results are given to show the effectiveness of the algorithm. Finally, the conclusion is given.

2. Equivalent Transformation

Because the set is nonempty and bounded, we can construct the rectangle which contains the feasible region of the problem (GFP), where and are the optimal values of the linear programming problems (2) and (3), respectively.

Firstly, we solve the following linear programming problems: The optimal solutions of (4) are and (), and the optimal values are denoted by and (), respectively. Obviously, and are feasible to (GFP). Set , where denotes the set of current feasible solutions of the problem (GFP). Set where , . Then the problem (GFP) is converted into the equivalent nonconvex programming problem:
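The bounds defining the rectangle come from linear programs over the feasible set. As a rough illustrative sketch (our simplification, not the paper's procedure: we bound an affine denominator over an enclosing box by interval arithmetic instead of solving the LPs, which yields valid but generally looser bounds; all names are ours):

```python
def affine_range(c0, c, box):
    """Range of d(x) = c0 + sum(c[i] * x[i]) over a box.

    box is a list of (lo, hi) intervals, one per variable.
    Each term c[i]*x[i] attains its extremes at an interval endpoint,
    so summing per-term minima/maxima bounds d over the whole box.
    """
    lo = c0 + sum(min(ci * a, ci * b) for ci, (a, b) in zip(c, box))
    hi = c0 + sum(max(ci * a, ci * b) for ci, (a, b) in zip(c, box))
    return lo, hi
```

For example, `affine_range(1.0, [2.0, -1.0], [(0.0, 1.0), (0.0, 2.0)])` returns `(-1.0, 3.0)`.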

Theorem 1 (see [10]). If is a global optimal solution of the problem , then is a global optimal solution of the problem (GFP), and for every , when , ; conversely, if is a global optimal solution of the problem (GFP), then is a global optimal solution of the problem , where .

From Theorem 1, the problems (GFP) and are equivalent; their global optimal values are equal. Therefore, in order to solve (GFP), we only need to solve instead.

3. Linear Relaxation Technique

From Section 2, and are rectangles; set where . Since on we have , expanding this product gives .

Similarly, we can obtain that ; expanding it gives . Let Because , we have the following result:

Similarly, we have ; expanding them gives ; let Consequently, From formulae (12) and (14), the following formula is obtained:

In the problem , let and , respectively, denote the lower and upper bounds of ; then From formula (16), we obtain the linear relaxation programming problem of the problem : The optimal value of the problem is a lower bound on the optimal value of the problem over the feasible region .
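The linear under- and overestimators used here are the standard convex and concave envelopes (the McCormick envelopes) of a bilinear term w = x·y over a rectangle. A minimal sketch of the four envelope bounds (function and variable names are ours):

```python
def mccormick(x, y, xL, xU, yL, yU):
    """McCormick envelope values for the bilinear term x*y on
    [xL, xU] x [yL, yU]: a linear underestimate and overestimate
    evaluated at the point (x, y)."""
    under = max(xL * y + x * yL - xL * yL,   # convex envelope pieces
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,    # concave envelope pieces
               xL * y + x * yU - xL * yU)
    return under, over
```

For any (x, y) inside the rectangle, `under <= x*y <= over`; as the rectangle shrinks during branching, the gap between the two bounds tends to zero, which is what drives convergence of the relaxation.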

Obviously, the problem can equivalently be converted into the following linear programming problem :

The optimal value of the problem can be obtained by solving the linear programming problem , and it is a lower bound on the optimal value of the problem in the feasible region .

The Determination of the Upper Bound. From the lower-bounding process, by solving we obtain a global optimal solution ; let It is obvious that is a feasible solution of . Therefore, provides an upper bound for the global optimal value of the problem .

4. Branching

In this algorithm, the branching process is executed in the space of rather than in . In general, when , the amount of computation decreases, so the efficiency of computation improves. Therefore, we choose the rectangle which contains to branch on, and each subrectangle after branching is also -dimensional. Set Let denote the initial rectangle or a subrectangle of it. The branching rule is as follows:
(i) choose the longest edge of , that is, ;
(ii) let and
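The branching rule above bisects the longest edge of the current rectangle at its midpoint. A minimal sketch (representing a rectangle as a list of intervals is our assumption):

```python
def bisect_longest(rect):
    """Split a rectangle (list of (lo, hi) intervals) into two
    subrectangles by bisecting its longest edge at the midpoint."""
    # index of the longest edge
    k = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    lo, hi = rect[k]
    mid = (lo + hi) / 2.0
    left, right = list(rect), list(rect)
    left[k] = (lo, mid)
    right[k] = (mid, hi)
    return left, right
```

This exhaustive bisection guarantees that the selected rectangles shrink to a point along every coordinate, which is used in the convergence proof of Theorem 2.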

5. Algorithm and Its Convergence

The branch and bound algorithm of the problem (GFP) is stated as follows:

Step 1. Choose , the initial rectangle ; we can find an optimal solution and the optimal value by solving the problem . Set , . Set , , .

If , stop. and are global -optimal solutions of problems and (GFP), respectively. Otherwise, set , , , and go to Step 2.

Step 2. Set . Subdivide into two -dimensional rectangles via the branching rule. Set .

Step 3. For , compute . If , find an optimal solution of problem with ; set .

Step 4. Set . If , go to Step 6. Otherwise, continue.

Step 5. If , set ; go to Step 4. Otherwise, set Let If , go to Step 4. If , set . Let

Step 6. Set .

Step 7. Set . Let satisfy .

If , stop. and are global -optimal solutions of the problems and (GFP), respectively. Otherwise, set and go to Step 2.
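Steps 1–7 follow the standard lower-bounding branch and bound scheme. The following generic sketch is ours, not the paper's exact algorithm: the lower bound is supplied as an abstract function in place of the linear relaxation, and the upper bound (the incumbent) comes from evaluating the objective at subrectangle midpoints rather than at relaxation solutions:

```python
def branch_and_bound(f, lower_bound, rect, eps=1e-6, max_iter=10000):
    """Minimize f over a rectangle using per-rectangle lower bounds.

    f(x): objective value at a point x (list of floats).
    lower_bound(rect): any valid lower bound of f over rect.
    Stops when best_ub - smallest lower bound <= eps
    (the termination test of Steps 1 and 7).
    """
    def split(r):  # bisect the longest edge (the Section 4 rule)
        k = max(range(len(r)), key=lambda i: r[i][1] - r[i][0])
        lo, hi = r[k]
        mid = (lo + hi) / 2.0
        a, b = list(r), list(r)
        a[k], b[k] = (lo, mid), (mid, hi)
        return a, b

    def midpoint(r):
        return [(lo + hi) / 2.0 for lo, hi in r]

    best_x = midpoint(rect)
    best_ub = f(best_x)                    # incumbent upper bound
    active = [(lower_bound(rect), rect)]
    for _ in range(max_iter):
        if not active:
            break
        active.sort(key=lambda t: t[0])
        lb, r = active.pop(0)              # rectangle with smallest bound
        if best_ub - lb <= eps:            # epsilon-optimality reached
            break
        for child in split(r):
            clb = lower_bound(child)
            x = midpoint(child)
            fx = f(x)
            if fx < best_ub:               # update the incumbent
                best_ub, best_x = fx, x
            if clb < best_ub - eps:        # prune hopeless rectangles
                active.append((clb, child))
    return best_ub, best_x
```

As a toy usage, minimizing f(x) = x² over [-1, 2] with the exact interval lower bound converges to 0 within a few dozen iterations.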

Next, the convergence of the algorithm is stated in the following theorem.

Theorem 2. (a)  If the algorithm is finite, and are global -optimal solutions of the problems and (GFP), respectively.
(b)  For , let denote the incumbent solution at the end of step . If the algorithm is infinite, then is a feasible solution sequence, whose every accumulation point is a global optimal solution of the problem (GFP), and

Proof. (a) If the algorithm is finite, without loss of generality, it terminates in step , since is obtained by solving problem , for some and optimal solution , set where is a feasible solution of the problem (GFP) and is a feasible solution of problem . When , the algorithm terminates. From Steps 1, 2, and 5, it is implied that ; by the algorithm, it shows that . Since is a feasible solution of the problem , therefore, .
Taken together, it is implied that Therefore, From the formula , we have From (27), this implies that The proof of (a) is complete.
(b) If the algorithm is infinite, then it generates a sequence of incumbent solutions of the problem , denoted by , for each , is obtained by solving the problem . For some and optimal solution , set Then the sequence consists of feasible solutions of the problem (GFP).
Suppose that is an accumulation point of . Assume without loss of generality that . Since is a compact set, . Furthermore, because is infinite, we assume without loss of generality that, for each , , for some point , Set , for each ; let . Since , from Step 5, we know that is a nonincreasing sequence, and is a finite number and satisfies For each , from Step 3, we know that is equal to the optimal value of the problem and that is an optimal solution of this problem. From (31), we have Since , , and the continuity of , This implies that is a feasible solution of the problem . Therefore, Together with (32), we have Since the branching process is bisection and the branching process of rectangle is exhaustive, we have Therefore, is a global optimal solution of the problem . By Theorem 1, this implies that is a global optimal solution of the problem (GFP). For each , since is the incumbent solution of the problem (GFP) at the end of step , ; by the continuity of , we obtain that Since is a global optimal solution of the problem (GFP), Therefore, . The proof is complete.

6. Numerical Experiment

The proposed algorithm is programmed in MATLAB 7.8 and run on a Pentium(R) 4 CPU at 3.20 GHz. In order to compare with the algorithm of the literature [10], we perform the three experiments from the literature [10].

Example 1 (see [10]). We choose ; for each , the numerator and denominator are and all satisfy

Following our algorithm, we first solve the following linear programming problems: whose optimal solutions are denoted by ; then where denotes the set of current feasible solutions of the problem , and the optimal values are denoted by and ; the initial rectangle is then By solving the linear relaxation programming problem , we obtain the optimal solution and the optimal value ; then a lower bound of the original problem is . Set Then is a feasible solution of , , and it provides an upper bound for the global optimal value of the problem . Next, we branch on the rectangle corresponding to the current lower bound and obtain the following rectangles via our algorithm: We solve the linear relaxation programming problem in the rectangles and , respectively. In , the optimal solution and the optimal value are and ; then in the rectangle , the lower bound of the original problem is , and the upper bound corresponding to the optimal solution is 4.9617 (>4.9126), so the upper bound is unchanged. In , the optimal solution and the optimal value are and ; then in the rectangle , the lower bound of the original problem is , and the upper bound corresponding to the optimal solution is 4.9323 (>4.9126), so the upper bound is also unchanged. We continue to branch on the rectangle corresponding to the current lower bound until, at the 55th iteration, the lower bound obtained by solving the linear programming problem in satisfies the termination rule. Therefore, the optimal value and the optimal solution of the original problem are and ; the lower bound of the optimal value is , which is an approximate optimal value. The accuracy is .

The above example satisfies , where denotes the number of variables; our algorithm achieves a good approximation within the given accuracy. In Example 2, ; in Example 3, we still obtain good results. As and increase, the computational cost increases. For example, in Example 3, , we can quickly obtain the approximate optimal value and the optimal solution using this paper's algorithm, but its performance is poorer than in the former example. The result of Example 1 is shown in Table 1.

Table 1

Example 2 (see [10]). The optimal value is 2.8619.

Example 3 (see [10]). The optimal value is 3.7109.

We choose ; then the approximate optimal solution satisfying the accuracy , the number of iterations, and the CPU running time are obtained. The results of our algorithm are shown in Table 2, and the results of the literature [10] are shown in Table 3.

Table 2
Table 3

According to Tables 2 and 3, in Example 1, although the optimal solution of the literature [10] is feasible, its optimal value 5 is bigger than the value 4.9126 of our algorithm; in Example 2, the optimal solution of the literature [10] turns out to be infeasible; in Example 3, the optimal value 4.0000 which corresponds to the optimal solution of the literature [10] is actually 3.8384, but it is still bigger than the value 3.7109 of our algorithm.

From the above comparison, the optimal values of our algorithm are smaller than those in the literature [10], and except for Example 1, the iteration counts for Examples 2 and 3 are much smaller than in the literature [10]. Although our running time is longer than that of the literature [10], the extra time is an acceptable price for a more accurate optimal solution.

In conclusion, our algorithm is feasible and effective, and to some degree it outperforms the algorithm of the literature [10].

7. Conclusion

In this paper, the solution of the sum of linear ratios programming problem is discussed. The problem is equivalently transformed into a bilinear programming problem; then, using the linear character of the convex and concave envelopes of the product of two variables, a linear relaxation of the bilinear programming problem is constructed, which provides a lower bound on the optimal value of the original problem. On this basis, a branch and bound algorithm for the sum of linear ratios programming problem is proposed and its convergence is proved. Numerical results show the effectiveness of the algorithm, and its computational results are better than those of the literature [10].

Acknowledgments

The work is supported by the National Natural Science Foundation of China (11161001) and by the research project of Beifang University of Nationalities (2013XYZ025).

References

  1. H. Konno and H. Watanabe, “Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach,” Journal of the Operations Research Society of Japan, vol. 39, no. 3, pp. 295–306, 1996.
  2. J. E. Falk and S. W. Palocsay, “Optimizing the sum of linear fractional functions,” in Recent Advances in Global Optimization (Princeton, NJ, 1991), Princeton Series in Computer Science, pp. 221–258, Princeton University Press, Princeton, NJ, USA, 1992.
  3. R. Horst, P. M. Pardalos, and N. V. Thoai, Introduction to Global Optimization, vol. 48 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2nd edition, 2000.
  4. H. Konno, Y. Yajima, and T. Matsui, “Parametric simplex algorithms for solving a special class of nonconvex minimization problems,” Journal of Global Optimization, vol. 1, no. 1, pp. 65–81, 1991.
  5. H. Konno and N. Abe, “Minimization of the sum of three linear fractional functions,” Journal of Global Optimization, vol. 15, no. 4, pp. 419–432, 1999.
  6. P.-P. Shen and C.-F. Wang, “Global optimization for sum of linear ratios problem with coefficients,” Applied Mathematics and Computation, vol. 176, no. 1, pp. 219–229, 2006.
  7. W. Yanjun, S. Peiping, and L. Zhian, “A branch-and-bound algorithm to globally solve the sum of several linear ratios,” Applied Mathematics and Computation, vol. 168, no. 1, pp. 89–101, 2005.
  8. H. P. Benson, “Solving sum of ratios fractional programs via concave minimization,” Journal of Optimization Theory and Applications, vol. 135, no. 1, pp. 1–17, 2007.
  9. H. W. Jiao and Q. G. Feng, “Global optimization for sum of linear ratios problem using new pruning technique,” Mathematical Problems in Engineering, vol. 2008, Article ID 646205, 12 pages, 2008.
  10. C.-F. Wang and P.-P. Shen, “A global optimization algorithm for linear fractional programming,” Applied Mathematics and Computation, vol. 204, no. 1, pp. 281–287, 2008.