Mathematical Problems in Engineering

Volume 2008 (2008), Article ID 646205, 12 pages

http://dx.doi.org/10.1155/2008/646205

## Global Optimization for Sum of Linear Ratios Problem Using New Pruning Technique

^{1}Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, China
^{2}College of Mechanical and Electric Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
^{3}Jiangsu Provincial Key Laboratory of Modern Agricultural Equipment and Technology, Jiangsu University, Zhenjiang 212013, China
^{4}College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China

Received 7 June 2008; Accepted 21 November 2008

Academic Editor: Alexander P. Seyranian

Copyright © 2008 Hongwei Jiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A global optimization algorithm is proposed for solving the sum of general linear ratios problem (P) using a new pruning technique. First, an equivalent problem (P1) of (P) is derived by exploiting the characteristics of the linear constraints. Then, by utilizing a linearization method, the relaxation linear programming (RLP) of (P1) is constructed; the proposed algorithm converges to the global minimum of (P) through the successive refinement of the linear relaxation of the feasible region and the solutions of a series of (RLP) problems. A new pruning technique is then proposed; it makes it possible to cut away a large part of the currently investigated feasible region and can be used as an accelerating device in the global optimization of problem (P). Finally, numerical experiments are given to illustrate the feasibility of the proposed algorithm.

#### 1. Introduction

Consider the following sum of general linear ratios problem:

$$(\mathrm{P}):\quad \min\ f(x)=\sum_{i=1}^{N}\frac{n_i(x)}{d_i(x)}\quad\text{s.t.}\ x\in X=\{x\in\mathbb{R}^n : Ax\le b\},$$

where $N$ is a natural number and $n_i(x)$, $d_i(x)$, $i=1,\dots,N$, are all finite affine functions on $\mathbb{R}^n$ such that $d_i(x)\neq 0$ for all $x\in X$.
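As a concrete illustration (not taken from the paper), the objective of (P) can be evaluated directly once the affine coefficients are given; the names `c`, `c0`, `d`, `d0` below are hypothetical stand-ins for the numerator and denominator data:

```python
# Evaluate f(x) = sum_i (c_i^T x + c0_i) / (d_i^T x + d0_i) for a
# sum-of-linear-ratios problem; every denominator is assumed nonzero
# at the given point, as required for (P).
def sum_of_ratios(x, c, c0, d, d0):
    total = 0.0
    for ci, ci0, di, di0 in zip(c, c0, d, d0):
        num = sum(a * b for a, b in zip(ci, x)) + ci0
        den = sum(a * b for a, b in zip(di, x)) + di0
        total += num / den
    return total
```

For instance, with two ratios in two variables the value at a point is just the sum of the two affine quotients.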

Problem (P) has attracted the interest of researchers for many years. This is because problem (P) has a number of important applications, including multistage shipping problems, cluster analysis, and multiobjective bond portfolio [1, 2]. However, some computational difficulties can be encountered, since multiple local optima of problem (P) that are not globally optimal exist.

In the last decades, many solution algorithms have been proposed for globally solving special cases of (P), most of which are intended only for the sum of positive linear ratios problem, with positivity assumptions on the ratios over the feasible region [2, 3]. In [4], Kuno proposed a new method for maximizing a sum of linear ratios, which uses a concave function to overestimate the optimal value of the original problem. A global optimization method was considered by Jiao et al. [5]: by introducing parameters, the global optimal solution can be derived using linear relaxation and a branch-and-bound algorithm. Recently, in [6], Ji et al. presented a deterministic global optimization algorithm for the linear sum-of-ratios problem, and Jiao and Chen [7] gave a short extension of the algorithm proposed in [6]. Although optimization methods for special forms of (P) are ubiquitous, to our knowledge little work has been done in the literature on globally solving the sum of general linear ratios problem (P), in which the numerators and denominators of the ratios may take arbitrary values as long as the denominators are nonzero over the feasible region, which is the case considered in this paper.

The purpose of this paper is to develop a deterministic algorithm for solving the sum of general linear ratios problem (P), in which the numerators and denominators of the ratios may take arbitrary values as long as the denominators are nonzero over the feasible region. The main features of the algorithm are as follows. (1) An equivalent optimization problem (P1) of (P) is derived by exploiting the characteristics of the linear constraints. (2) A new linearization method is proposed to linearize the objective function of (P1); the resulting linear relaxation of (P1) is easier to obtain and, unlike the methods in [5, 8], introduces no new variables or constraints. (3) A new pruning technique is given; it makes it possible to cut away a large part of the currently investigated feasible region. Using this technique as an accelerating device in the proposed algorithm, we can greatly reduce the currently investigated feasible region and improve the convergence of the algorithm. (4) The proposed algorithm converges to the global minimum through the successive refinement of the linear relaxation of the feasible region and the solutions of a series of (RLP) problems. Finally, numerical results show the feasibility and effectiveness of the proposed algorithm.

The organization of this article is as follows. In Section 2, we show how to convert (P) into the equivalent problem (P1) and generate the relaxed linear programming (RLP) of (P1). In Section 3, the new pruning technique is presented. In Section 4, the proposed branch-and-bound algorithm, in which the relaxed subproblems are embedded, is described and its convergence is shown. Some numerical results are reported in Section 5, and Section 6 provides some concluding remarks.

#### 2. Linear Relaxation Programming

In this section, we first convert (P) into an equivalent nonconvex programming problem (P1). In order to globally solve (P), the branch-and-bound algorithm to be presented can then be applied to (P1).

First, we solve the following $2n$ linear programming problems:

$$l_j^0=\min_{x\in X} x_j,\qquad u_j^0=\max_{x\in X} x_j,\qquad j=1,\dots,n.$$

Then we obtain the initial partition rectangle $X^0=\{x\in\mathbb{R}^n : l_j^0\le x_j\le u_j^0,\ j=1,\dots,n\}$. Since each denominator $d_i(x)$ is affine, continuous, and nonzero over the convex set $X$, it keeps a constant sign there; multiplying the numerator and denominator of any ratio with negative denominator by $-1$, we may assume without loss of generality that problem (P) can be rewritten in the following form (P1), with $d_i(x)>0$ over the feasible region for each $i=1,\dots,N$. Obviously, we have $l_j^0\le u_j^0$ for each $j$.
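The bounds of the initial rectangle come from $2n$ linear programs. As a small illustrative sketch (not the paper's code), one can exploit the fact that each of these LPs attains its optimum at a vertex of the polytope; when a vertex list happens to be available, the LP solves reduce to coordinate-wise minima and maxima — a deliberate simplification here so the idea can be tested without an LP solver:

```python
def initial_box(vertices):
    """Tightest axis-aligned rectangle containing the polytope.
    Each LP min/max x_j over X attains its optimum at a vertex, so
    with the vertex list in hand the 2n LP solves reduce to
    coordinate-wise min/max (an illustrative simplification)."""
    n = len(vertices[0])
    lo = [min(v[j] for v in vertices) for j in range(n)]
    hi = [max(v[j] for v in vertices) for j in range(n)]
    return lo, hi
```

In practice one would call an LP solver (e.g. the simplex method used in the paper's experiments) for each of the $2n$ subproblems instead.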

Theorem 2.1. *Problems (P) and (P1) have the same global optimal solution.*

*Proof. * Obviously, if $x$ is feasible to (P), then $x$ is feasible to (P1); conversely, if $x$ is feasible to (P1), then $x$ is feasible to (P). So the two problems have the same feasible region and the same objective value at each feasible point, and the conclusion follows.

The linear relaxation of (P1) can be realized by underestimating each function with a linear function . All the details of this linearization technique for generating relaxations will be given in the following theorems.

Given any and for all ,
the following notations are introduced:

Theorem 2.2. * Let for any and let and defined in (2.5). Then, for all *

*Proof. * Obviously, by the continuity of the function ,
there must exist some and ,
such that and .
If ,
then and ,
that is, and .
Then, there must exist one positive number such that ,
where ,
which contradicts for any .
Therefore, we have ,
that is, or .
Obviously, for ,
the following two conclusions hold.

(i) If , then , and we have .

(ii) If , then , and we have .

By (i) and (ii), for all , we have .

Theorem 2.3. * For any ,
consider the functions and defined in (2.3) and (2.4). Then, the following
two statements hold.*

(i) *The functions and satisfy .*

(ii) *The maximal errors of bounding using and satisfy , where*

*Proof. * The proof of the theorem can be found in [7].

For convenience of exposition, in the following we assume that represents either the initial bounds on the variables of problem (P1) or modified bounds as defined for some partitioned subproblem in a branch-and-bound scheme. By means of Theorem 2.3, we can give the linear relaxation of (P1). Let ; consequently, we construct the corresponding approximate relaxation linear programming (RLP) of (P1) in as follows:

Based on the above linear underestimators, every feasible point of (P1) in subdomain is feasible in (RLP); and the value of the objective function for (RLP) is less than or equal to that of (P1) for all points in . Thus, (RLP) provides a valid lower bound for the solution of (P1) over the partition set . It should be noted that problem (RLP) contains only the necessary constraints to guarantee convergence of the algorithm.
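The paper's linearization of each ratio is developed in Theorems 2.2 and 2.3, whose formulas are not reproduced here. As a simpler stand-in that conveys the same lower-bounding idea, a single ratio can be bounded from below over a box by interval arithmetic; this is *not* the authors' linearization, only an illustrative valid underestimate, assuming a positive denominator on the box as in (P1):

```python
def affine_range(coef, const, lo, hi):
    """Exact range of c^T x + c0 over the box [lo, hi] (interval arithmetic)."""
    lo_v = const + sum(c * (l if c >= 0 else h) for c, l, h in zip(coef, lo, hi))
    hi_v = const + sum(c * (h if c >= 0 else l) for c, l, h in zip(coef, lo, hi))
    return lo_v, hi_v

def ratio_lower_bound(c, c0, d, d0, lo, hi):
    """A valid lower bound for (c^T x + c0)/(d^T x + d0) over the box,
    assuming the denominator stays positive on it."""
    nl, nu = affine_range(c, c0, lo, hi)
    dl, du = affine_range(d, d0, lo, hi)
    assert dl > 0, "denominator must be positive on the box"
    # Worst case pairs the smallest numerator with the denominator
    # extreme that makes the quotient smallest.
    return nl / du if nl >= 0 else nl / dl
```

Like the (RLP) bound, this underestimate tightens as the box shrinks, which is what the branch-and-bound refinement relies on.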

#### 3. New Pruning Technique

In this section, we pay more attention to how to form the new pruning technique, which deletes or reduces a large part of the regions in which there exists no global optimal solution, so that we can accelerate the convergence of the proposed algorithm. Let with be a subrectangle of , that is, . Moreover, assume that is the currently known upper bound of the optimal objective value of problem (P1). For convenience in the following discussion, we introduce some notation:

Theorem 3.1. *Consider the subrectangle ; we have the following conclusions.*

(i) *If , then .*

(ii) *If , then: if there exists some index such that and , then ; if there exists some index such that and , then , where*

*Proof. *(i) By
assumption and definitions of in (3.1), if then for any ,
that is, for any .

(ii) By the assumption and the definitions of in (3.1), if , then we have the following two conclusions.

(a) If there exists some index such that and , then for any . Let and for some ; we first show that . By the assumption and the definitions of and in (3.1), we have ; that is, . Furthermore, from the definitions of , we get . By the above discussion and assumption, it follows that for any . Therefore, ; that is, there exists no global optimal solution in . The proof of part (a) is complete.

(b) If there exists some index such that and , then ; since the proof of part (b) is similar to that of part (a), it is omitted here.

Theorem 3.2. *Consider the subrectangle ; we have the following conclusions.*

(i) *If there exists some such that on the subrectangle , then there exists no globally optimal solution in .*

(ii) *If on the subrectangle , then: if there exists some index such that and , then there exists no globally optimal solution in ; if there exists some index such that and , then there exists no globally optimal solution in , where*

*Proof. * Since the
proof of Theorem 3.2 is similar to that of Theorem 3.1, it is omitted here.

Based on Theorems 3.1 and 3.2, we can now give the new pruning technique to cut away or reduce a large part of the region in which there exists no optimal solution.

Next, we show how this new pruning technique is formed; that is, we provide a process that shows how a subrectangle can be deleted or reduced, where with . Let denote the discarded interval in . First, we calculate according to (3.1). Then, the eliminated interval can be determined according to the following rules, which we call the new pruning technique.

*Rule 1*

If then

*Rule 2*

If then

(a) If and , then .

(b) If and , then .

*Rule 3*

If for some ,
then

*Rule 4*

If for some ,
then

(c) If and , then .

(d) If and , then .

Consequently, under some assumptions the discarded part of is for some . The new rectangle-reducing tactic offers the possibility of cutting away or reducing a large part of the rectangle currently investigated by the procedure. The remaining part of , denoted by , is left for further consideration, where if , and if .
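The spirit of Rules 1–4 can be sketched as follows: given a linear underestimator $L(x)=\alpha+\sum_j \beta_j x_j$ of the objective and an incumbent upper bound, the part of one variable's interval on which $L$ necessarily exceeds the upper bound can be cut away. The function below is a hedged illustration of that idea, not the paper's exact rules (whose quantities are defined in (3.1)); the names `alpha`, `beta`, `tau`, `ub` are assumptions for this sketch:

```python
def prune_interval(alpha, beta, lo, hi, tau, ub):
    """Shrink [lo[tau], hi[tau]] using a linear underestimator
    L(x) = alpha + sum_j beta[j]*x[j] <= f(x) and an incumbent upper
    bound `ub`: any x whose tau-th coordinate forces L(x) > ub for
    every choice of the other coordinates cannot be optimal."""
    # Smallest possible contribution of the other coordinates to L(x).
    rest = sum(bj * (lj if bj >= 0 else hj)
               for j, (bj, lj, hj) in enumerate(zip(beta, lo, hi))
               if j != tau)
    bt = beta[tau]
    if bt > 0:
        cut = (ub - alpha - rest) / bt
        return lo[tau], min(hi[tau], cut)    # prune the upper part
    if bt < 0:
        cut = (ub - alpha - rest) / bt
        return max(lo[tau], cut), hi[tau]    # prune the lower part
    return lo[tau], hi[tau]                  # no cut possible
```

Because $L$ underestimates $f$, every point removed this way satisfies $f(x)\ge L(x)>\,$ub, so no global optimum is lost.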

#### 4. Algorithm and Its Convergence

In this section, by combining the branch-and-bound framework with the new pruning technique, a global optimization algorithm is proposed for solving problem (P1). This algorithm needs to solve a sequence of relaxation linear programming problems over partitioned subsets of in order to find a globally optimal solution.

The branch-and-bound approach is based on partitioning the set into sub-hyperrectangles, each concerned with a node of the branch-and-bound tree, and each node is associated with a relaxation linear subproblem on its sub-hyperrectangle. Hence, at any stage of the algorithm, suppose that we have a collection of active nodes denoted by , each associated with a hyperrectangle for all . For each such node , we will have computed a lower bound of the optimal value of (P1) via the solution of (RLP), so that the lower bound on the optimal value of (P1) over the whole initial box region at stage is given by for all . Whenever the solution of the relaxation linear programming (RLP) turns out to be feasible to problem (P1), we update the upper bound and the incumbent solution if necessary. Then, the active node collection will satisfy for all , for each stage . We now select an active node and partition its associated hyperrectangle into two sub-hyperrectangles as described below, computing the lower bounds for each new node as before. Upon fathoming any nonimproving nodes, we obtain a collection of active nodes for the next stage, and this process is repeated until convergence is obtained.

The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. In our paper, we choose a simple and standard bisection rule. This method is sufficient to ensure convergence since it drives all the intervals to zero for all variables. This branching rule is given as follows.

Assume that the sub-hyperrectangle is going to be divided. Then, we select the branching variable satisfying and partition by bisecting the interval into the subintervals and .

The basic steps of the proposed algorithm are summarized as follows. Let refer to the optimal objective function value of (P1) on the sub-hyperrectangle , and let refer to an element of the corresponding argmin.

*Algorithm Statement*

*Step 1 (initialization). * Initialize the iteration counter , the set of all active nodes , the upper bound , and the set of feasible points . Solve the problem (RLP) for , obtaining and . If is feasible to (P1), update and , if necessary. If , where is some accuracy tolerance, then stop with as the prescribed solution to problem (P1). Otherwise, proceed to Step 2.

*Step 2 (midpoint check). * Select the midpoint of ; if is feasible to (P1), then . Define the upper bound . If , the best known feasible point is denoted .

*Step 3 (branching). * Choose a branching variable to partition into two new sub-hyperrectangles according to the branching rule selected above. Call the set of new partition rectangles .

*Step 4 (pruning). * (1) If , then calculate , and for each . If one of the assumption conditions in Rules 1–4 is satisfied, then the proposed new pruning technique can be applied to each ; the remaining parts of and are denoted by and , respectively.

(2) If , solve (RLP) to obtain and for each . If , set ; otherwise, update the best available solution and , if possible, as in Step 2.

*Step 5 (updating lower bound). * The remaining partition set now gives a new lower bound .

*Step 6 (convergence check). * Fathom any nonimproving nodes by setting . If , then stop: is the optimal value of (P1), and is an optimal solution. Otherwise, set , select an active node such that , , and return to Step 2.
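Putting the pieces together, the overall scheme of Steps 1–6 can be sketched in miniature as follows. This is a simplified illustration over box constraints only, with user-supplied objective and lower-bounding routines, and without the pruning step; it is not the authors' C++ implementation:

```python
import heapq

def branch_and_bound(f, lower_bound, lo, hi, eps=1e-6, max_iter=100000):
    """Minimal branch-and-bound sketch over a box [lo, hi].
    `f(x)` evaluates the objective; `lower_bound(lo, hi)` must
    underestimate f over the box and tighten as the box shrinks."""
    mid = [(l + h) / 2 for l, h in zip(lo, hi)]
    best, ub = mid, f(mid)                       # incumbent (Step 1/2)
    heap = [(lower_bound(lo, hi), lo, hi)]       # active nodes
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi = heapq.heappop(heap)         # most promising node
        if ub - lb <= eps:                       # convergence check (Step 6)
            break
        j = max(range(len(lo)), key=lambda k: hi[k] - lo[k])
        m = (lo[j] + hi[j]) / 2                  # bisect widest edge (Step 3)
        for clo, chi in ((lo, hi[:j] + [m] + hi[j + 1:]),
                         (lo[:j] + [m] + lo[j + 1:], hi)):
            mid = [(l + h) / 2 for l, h in zip(clo, chi)]
            val = f(mid)                         # midpoint check (Step 2)
            if val < ub:
                best, ub = mid, val
            clb = lower_bound(clo, chi)
            if clb <= ub - eps:                  # fathom nonimproving nodes
                heapq.heappush(heap, (clb, clo, chi))
    return best, ub
```

For example, minimizing the single ratio $(x+1)/(x+2)$ over $[0,2]$ with the interval bound $(l+1)/(u+2)$ converges to the optimal value $0.5$ at $x=0$.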

*Convergence of the Algorithm*

Let be the set of accumulation points of ,
and let be ,
where is the feasible space of the (P1).

Theorem 4.1. *The above algorithm either terminates finitely with the incumbent solution being optimal to (P1) or generates an infinite sequence of iterations such that, along any infinite branch of the branch-and-bound tree, any accumulation point of the sequence will be a global minimum of (P1).*

*Proof. * If the above algorithm terminates finitely at some iteration , then obviously is the global optimal value and is an optimal solution of (P1). If the algorithm is infinite, it generates at least one infinite sequence such that for any . In this case, since the partition sets used by the proposed algorithm are all rectangular and compact, by Tuy [9] it follows that this rectangular subdivision is exhaustive. Hence, for every iteration , by the design of the algorithm, we have . Horst [10] shows that is a nondecreasing sequence bounded above by , which guarantees the existence of the limit . Since is a sequence on a compact set, it has a convergent subsequence. For any , suppose that there exists a subsequence of with . By the proposed algorithm and [9], it follows that the subdivision of partition sets in Step 3 is exhaustive on and that the selection of elements to be partitioned in Step 3 is bound improving. Thus, there exists a decreasing subsequence , where with . From the construction of the linear lower-bounding relaxation functions for the objective function of (P1), we know that the linear subfunctions used in (RLP) are strongly consistent on . Thus, it follows that .

#### 5. Numerical Experiments

To verify the performance of the proposed algorithm, some commonly used test problems were run on a Pentium IV (433 MHz) microcomputer. The algorithm is coded in C++, each linear programming subproblem is solved by the simplex method, and the convergence tolerance is set to in our experiments. Below, we describe some of these sample problems; the solution results are summarized in Table 1. In Table 1, the following notation is used for the column headers. Iter: number of algorithm iterations; maxnode: the maximal number of active nodes necessary; time: execution time in seconds; : feasibility tolerance.

*Example 5.1 (see [3]). *We have

*Example 5.2 (see [2]). *We have

If a sum of general linear ratios appears in the constraint functions of problem (P1), then by using the same linear relaxation method proposed in Section 2 we can construct the corresponding linear relaxation programming of problem (P1). Therefore, the proposed algorithm can be extended to solve the following sum of linear ratios problem with sum-of-linear-ratios constraints.

*Example 5.3. *We have

*Example 5.4. *We have

The numerical results in Table 1 show that our algorithm can globally solve the sum of general linear ratios problem (P) on a microcomputer.

#### 6. Concluding Remarks

A global optimization algorithm is proposed for solving the sum of general linear ratios problem (P). To globally solve (P), we first convert (P) into an equivalent problem (P1); then a new linearization method is proposed to construct the linear relaxation programming of (P1). Next, a new pruning technique is proposed; it makes it possible to cut away a large part of the currently investigated feasible region and can be used as an accelerating device in the global optimization of problem (P). The proposed algorithm converges to the global minimum of (P1) through the successive refinement of the linear relaxation of the feasible region and the subsequent solutions of a series of (RLP) problems. Finally, numerical experiments are given to illustrate the feasibility and effectiveness of the proposed algorithm.

#### Acknowledgment

This work was supported by the National Science Foundation of China (10671057) and the National Science Foundation of Henan Institute of Science and Technology (06055).

#### References

- [1] H. Konno and H. Watanabe, “Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach,” *Journal of the Operations Research Society of Japan*, vol. 39, no. 3, pp. 295–306, 1996.
- [2] W. Yanjun, S. Peiping, and L. Zhian, “A branch-and-bound algorithm to globally solve the sum of several linear ratios,” *Applied Mathematics and Computation*, vol. 168, no. 1, pp. 89–101, 2005.
- [3] N. T. Hoai Phuong and H. Tuy, “A unified monotonic approach to generalized linear fractional programming,” *Journal of Global Optimization*, vol. 26, no. 3, pp. 229–259, 2003.
- [4] T. Kuno, “A branch-and-bound algorithm for maximizing the sum of several linear ratios,” *Journal of Global Optimization*, vol. 22, no. 1–4, pp. 155–174, 2002.
- [5] H. Jiao, Y. Guo, and P. Shen, “Global optimization of generalized linear fractional programming with nonlinear constraints,” *Applied Mathematics and Computation*, vol. 183, no. 2, pp. 717–728, 2006.
- [6] Y. Ji, K.-C. Zhang, and S.-J. Qu, “A deterministic global optimization algorithm,” *Applied Mathematics and Computation*, vol. 185, no. 1, pp. 382–387, 2007.
- [7] H. Jiao and Y. Chen, “A note on a deterministic global optimization algorithm,” *Applied Mathematics and Computation*, vol. 202, no. 1, pp. 67–70, 2008.
- [8] P. Shen and C.-F. Wang, “Global optimization for sum of linear ratios problem with coefficients,” *Applied Mathematics and Computation*, vol. 176, no. 1, pp. 219–229, 2006.
- [9] H. Tuy, “Effect of the subdivision strategy on convergence and efficiency of some global optimization algorithms,” *Journal of Global Optimization*, vol. 1, no. 1, pp. 23–36, 1991.
- [10] R. Horst, “Deterministic global optimization with partition sets whose feasibility is not known: application to concave minimization, reverse convex constraints, DC-programming, and Lipschitzian optimization,” *Journal of Optimization Theory and Applications*, vol. 58, no. 1, pp. 11–37, 1988.