Abstract

We present a branch and bound algorithm for globally solving the sum of ratios problem in which each term of the objective function is a ratio of two functions, each of which is a sum of absolute values of affine functions with coefficients. This problem has an important application in financial optimization, yet global optimization algorithms for it remain rare in the literature. The branch and bound search uses rectangular partitioning and takes place in a space which typically has a much smaller dimension than the space to which the decision variables belong. Convergence of the algorithm is established. Finally, some numerical examples are given to demonstrate the effectiveness of the algorithm.

1. Introduction

The sum of ratios problem has attracted considerable attention in the literature because of its large number of practical applications in various fields of study, including transportation planning, government contracting, economics, and finance [1–6]. From a research point of view, the sum of ratios problem also poses significant theoretical and computational challenges, mainly because it is known to generally possess multiple local optima that are not globally optimal.

Many solution algorithms have been proposed for globally solving the sum of linear ratios problem with linear constraints (see, e.g., [7–11]). Recently, some algorithms have been developed for globally solving nonlinear sum of ratios problems; for instance, Freund and Jarre [12] proposed an interior-point approach for sums of convex-concave ratios with convex constraints; Dai et al. [13] and Pei and Zhu [14] presented two algorithms for sums of d.c. ratios; Benson [15, 16] gave two branch and bound algorithms for sums of concave-convex ratios; Yamamoto and Konno [17] proposed an algorithm for sums of convex-convex ratios; Shen and Jin [18] and Jiao and Shen [19] developed global optimization algorithms for two kinds of nonlinear sums of ratios.

In this paper, we are concerned with the following nonlinear sum of ratios problem: where , is a compact, convex set in , and , , . In addition, we assume that , , , , .
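Although the displayed formulation of problem is not reproduced above, a problem of the type described here and in the abstract can be written schematically as follows; every symbol in this display is illustrative and is not claimed to match the paper's original notation:

\[
\max_{x \in X} \; \sum_{j=1}^{p} \frac{\sum_{i=1}^{m_j} \bigl| a_{ji}^{\top} x + b_{ji} \bigr|}{\sum_{i=1}^{m_j} \bigl| c_{ji}^{\top} x + d_{ji} \bigr|},
\]

where $X \subseteq \mathbb{R}^{n}$ is a compact, convex set, $a_{ji}, c_{ji} \in \mathbb{R}^{n}$, and $b_{ji}, d_{ji} \in \mathbb{R}$, with the denominators assumed positive on $X$.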

Problem arises when the variance is replaced by the absolute deviation as a measure of the variation of a portfolio. Global optimization algorithms for this problem are still rare in the literature, so we believe that this paper is of interest to researchers in both portfolio optimization and fractional programming.

The purpose of this paper is to present a branch and bound algorithm for globally solving problem . We believe that the proposed algorithm has four potential practical and computational advantages. First, upper bounds are obtained by maximizing the concave envelope of the objective function of problem over rectangles. Second, the proposed algorithm uses rectangles rather than simplices as partition elements, so that branching takes place only in a space of dimension rather than or , although the algorithm's search is carried out mainly in a space of dimension . Third, we choose a simple and standard bisection rule; this rule is sufficient to ensure convergence since the partition rule is exhaustive. Finally, the upper bounding subproblems are convex programming problems that differ from each other only in the coefficients of certain linear constraints and in the bounds that describe their associated rectangles.

The remainder of this paper is organized as follows. In Section 2, an equivalent problem of problem is given. Next, in Section 3, we construct a function overestimating the value of the sum of ratios. In Section 4, the proposed branch and bound algorithm is described, and the convergence of the algorithm is established. Some numerical results are reported in Section 5. Concluding remarks are given in the last section.

2. Equivalent Problem

In order to globally solve problem , we first convert it into an equivalent nonconvex programming problem as follows:

Theorem 1. If is a global optimal solution for problem , then , , , and is a global optimal solution for problem . Conversely, if is a global optimal solution for problem , then is a global optimal solution for problem , where , , .

Proof. The proof follows easily from the definitions of problems and and is therefore omitted.

Without loss of generality, we assume that , , , , .

Let us define Then problem can be reformulated as follows:
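Although the displayed definitions are not reproduced above, the reformulation rests on the standard device of splitting each affine term inside an absolute value into nonnegative positive and negative parts; a generic instance, with illustrative symbols, is

\[
\bigl| a^{\top} x + b \bigr| = u + v, \qquad a^{\top} x + b = u - v, \qquad u \ge 0, \quad v \ge 0, \quad u v = 0.
\]

Each absolute value is thus replaced by a pair of nonnegative variables coupled by a complementarity condition $uv = 0$, and these complementarity conditions are exactly what is linearized next.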

As is well known, the set of complementarity conditions can be represented as a system of linear inequalities by introducing zero-one integer variables [20]: where and , are defined as follows: Then can be transformed into
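For a single complementarity pair, a standard linearization of this kind (a sketch only; the exact bounds used in the paper are not reproduced here) is the big-M formulation

\[
u \le M z, \qquad v \le M (1 - z), \qquad u \ge 0, \quad v \ge 0, \quad z \in \{0, 1\},
\]

where $M$ is a valid upper bound on $u$ and $v$ over the feasible region; setting $z = 0$ forces $u = 0$, and setting $z = 1$ forces $v = 0$, so at most one variable in each pair can be positive.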

For , we proceed similarly. Let where

And let So the problem is equivalent to the following problem:

3. Convex Relaxation Programming

The principal construct in the development of a solution procedure for solving is a convex relaxation of used to obtain an upper bound for this problem, as well as for its partitioned subproblems. Such a convex relaxation can be realized by using the concave envelope of the objective function of over an associated rectangle.

To help obtain convex relaxations, the concept of a concave envelope may be defined as follows.

Definition 2 (see [21]). Let be a compact, convex set, and let be upper semicontinuous on . Then is called the concave envelope of on if (i) is a concave function on ; (ii) for all ; (iii) there is no function satisfying (i) and (ii) such that for some point .
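In symbols (this restatement only makes the three conditions explicit; the names $g$, $G$, and $X$ are chosen here for illustration and need not match the paper's notation): $G$ is the concave envelope of $g$ on the compact convex set $X$ if (i) $G$ is concave on $X$; (ii) $G(x) \ge g(x)$ for all $x \in X$; (iii) there is no function $H$ satisfying (i) and (ii) such that $H(\bar{x}) < G(\bar{x})$ for some point $\bar{x} \in X$.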

The following theorem is obtained from the definition above.

Theorem 3. Consider a rectangle of , where satisfy , . For any , we define function ; then the concave envelope of the function is given by

Proof. This result is essentially shown in [15] and is therefore omitted.
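For orientation, when the function in question is the bilinear product $f(\beta, t) = \beta t$ over the rectangle $[\underline{\beta}, \overline{\beta}] \times [\underline{t}, \overline{t}]$, which is the setting treated in [15] (whether this is exactly the function of Theorem 3 cannot be confirmed from the text above), the concave envelope is the pointwise minimum of two overestimating planes:

\[
F(\beta, t) = \min \bigl\{ \overline{\beta}\, t + \underline{t}\, \beta - \overline{\beta}\, \underline{t}, \;\; \underline{\beta}\, t + \overline{t}\, \beta - \underline{\beta}\, \overline{t} \bigr\}.
\]

Each plane agrees with $\beta t$ at two corners of the rectangle and overestimates it elsewhere, since, for example, $\overline{\beta}\, t + \underline{t}\, \beta - \overline{\beta}\, \underline{t} - \beta t = (\overline{\beta} - \beta)(t - \underline{t}) \ge 0$.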
In order to obtain an upper bound on the optimal value of by solving a convex program, we can utilize Theorem 3 and relax the reverse convex constraints of problem , so that a convex program is given by

4. Branch and Bound Algorithm

In this section, a branch and bound algorithm is developed to solve based on the former convex relaxation method. This algorithm solves a sequence of convex relaxation programming problems over the rectangle or subrectangles of in order to find a global solution.

4.1. Rectangular Partition Rule

The critical element in guaranteeing convergence to a global maximum of is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule. This rule is sufficient to ensure convergence since it drives all the intervals to shrink to a singleton for all the variables along any infinite branch of the branch and bound tree. Assume that, at each stage of the branch and bound algorithm, or a subrectangle of is subdivided into two rectangles by the branching process. To explain this process, assume without loss of generality that the rectangle to be divided is . The branching rule is as follows: (i) let ; (ii) let ; (iii) let . It follows easily that this branching process is exhaustive.
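Because the displayed formulas of steps (i)–(iii) are not reproduced above, the following sketch only illustrates the kind of bisection described: the rectangle is split into two subrectangles through the midpoint of one of its edges. Here the longest edge is chosen, which is one common way to make the process exhaustive; the paper's exact selection rule may differ.

```python
def bisect(rect):
    """Split a rectangle, given as a list of (lower, upper) pairs, into two
    subrectangles through the midpoint of its longest edge."""
    # (i) choose the branching coordinate: here, the one with the longest edge
    j = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    lo, hi = rect[j]
    mid = 0.5 * (lo + hi)
    # (ii)-(iii) build the two children that share the bisecting hyperplane
    left, right = list(rect), list(rect)
    left[j] = (lo, mid)
    right[j] = (mid, hi)
    return left, right
```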

We are now ready to formally state the overall algorithm for globally solving problem . The basic steps of the algorithm are summarized in the following statement.

4.2. Algorithm Statement

Step 1 (initialization). Given a convergence tolerance . Set the iteration counter , the set of all active nodes , the lower bound , and the set of feasible points .
Solve the convex relaxation programming problem and obtain the optimal value and an optimal solution . Set , , and . Update .
If , then stop: is a global -optimal solution and is the optimal value of problem . Otherwise, proceed to Step 2.

Step 2 (branching). According to the above selected branching rule, partition into two new rectangles. Call the set of new partition rectangles .
For each , solve convex programming problem to obtain optimal value and optimal solution of the problem . If , then remove the corresponding subrectangle from , that is, , and skip to the next element of .
If , go to Step 3. Otherwise, update , and set ; the best known feasible point is denoted by .

Step 3 (updating upper bound). Denote the remaining partition set by , which gives a new upper bound .

Step 4 (convergence check). Fathom any nonimproving nodes by setting . If , then stop: is the optimal value, and are global -optimal solutions of problem , respectively. Otherwise, set and return to Step 2.
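The flow of Steps 1–4 can be summarized by the following skeleton, which uses the `bisect` routine sketched in Section 4.1. This is only an illustration of the algorithm statement above: `solve_relaxation`, which returns the optimal value and solution of the convex relaxation over a rectangle, and `objective`, which evaluates the objective of the equivalent problem at a point, are hypothetical placeholders and not routines from the paper's Fortran 95 implementation; as in the paper, the relaxation's solution is assumed to yield a feasible point.

```python
def branch_and_bound(rect0, solve_relaxation, objective, eps=1.0e-6):
    """Rectangular branch-and-bound skeleton for a maximization problem."""
    # Step 1: initialization and the first bounds
    ub0, x0 = solve_relaxation(rect0)
    best_val, best_x = objective(x0), x0
    active = [(ub0, rect0)]                 # set of active nodes
    while active:
        active.sort(key=lambda node: node[0])
        ub, rect = active.pop()             # bound-improving selection: largest upper bound
        if ub <= best_val + eps:            # Step 4: convergence check / fathoming
            break
        for child in bisect(rect):          # Step 2: branching by bisection
            ub_c, x_c = solve_relaxation(child)
            val_c = objective(x_c)          # feasible value gives a lower bound
            if val_c > best_val:
                best_val, best_x = val_c, x_c
            if ub_c > best_val + eps:       # keep only nodes that may still improve
                active.append((ub_c, child))
        # Step 3: the new upper bound is the largest bound over the remaining nodes
    return best_val, best_x
```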

4.3. Convergence Analysis

Next, we will give the convergence properties of the algorithm.

Theorem 4. (a) If the algorithm is finite, then, upon termination, is a global -optimal solution to problem .
(b) If the algorithm is infinite, then every accumulation point of the infinite sequence of feasible solutions to problem generated by the algorithm is a global optimal solution to problem .

Proof. (a) If the algorithm is finite, then it terminates in Step , . Upon termination, since is found by solving problem for some , is a feasible solution to problem . Upon termination of the algorithm, is satisfied. It is easy to show by standard arguments for branch and bound algorithms that Since is a feasible solution of problem , we have Taken together, the three previous statements imply that Therefore, and the proof of part (a) is complete.
(b) Assume that the algorithm is infinite. By [21], a sufficient condition for a global optimization algorithm to converge to the global maximum is that the bounding operation be consistent and the selection operation be bound improving.
A bounding operation is called consistent if at every step any unfathomed partition element can be further refined and if any infinitely decreasing sequence of successively refined partition elements satisfies where is the upper bound computed in Step and LB is the best lower bound at iteration , not necessarily occurring inside the same subrectangle as . Now, we show that (16) holds.
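From this description, condition (16) is presumably of the form

\[
\lim_{k \to \infty} \bigl( \mathrm{UB}_k - \mathrm{LB}_k \bigr) = 0,
\]

where $\mathrm{UB}_k$ denotes the upper bound computed in Step 3 for the $k$th element of the nested sequence and $\mathrm{LB}_k$ denotes the best lower bound at iteration $k$; the paper's exact notation is not reproduced here.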
Since the employed subdivision process is rectangular bisection, the process is exhaustive. Consequently, from the relation , where and denote the optimal values of problems and over the rectangle , respectively, condition (16) holds, and this implies that the employed bounding operation is consistent.
A selection operation is called bound improving if at least one partition element where the actual upper bound is attained is selected for further partition after a finite number of refinements. Clearly, the employed selection operation is bound improving because the partition element where the actual upper bound is attained is selected for further partition in the immediately following iteration.
From the above discussion, the branch and bound algorithm proposed in this paper is convergent to the global maximum of .

5. Computational Results

We conducted numerical experiments with the branch and bound algorithm on a Pentium IV microcomputer; the algorithm was coded in Fortran 95. Although the test problems have a relatively small number of variables, they are quite challenging. For all test problems, the numerical results show that the proposed global optimization algorithm can solve them efficiently. Computational results are given in Tables 1 and 2.

In Tables 1 and 2, the following notation is used for the column headers: Iter: the number of algorithm iterations; Max-node: the maximal number of active nodes needed; Time: the execution time in seconds, where a very short execution time (e.g., less than 0.1 second) is recorded as 0.

We test our algorithm on the following two types of randomly generated sum of ratios problems.

Problem 5. Consider where is an integer (e.g., is taken to be ), is generated randomly in the interval , and , while , are the corresponding values calculated by an appropriate factor model [22].

Problem 6. Consider where and are integers (e.g., they are taken to be , resp.) and , , , and are all generated randomly in the intervals , , , and , respectively. and are randomly generated according to the normal distribution .

To solve the above test Problems 5 and 6, we used the proposed algorithm with the convergence tolerance set to ; the corresponding numerical results are listed in Tables 1 and 2, respectively. Averages are obtained by running the algorithm on 10 test problems of each size. Tables 1 and 2 show the variation in the average computational effort required when was varied in and was varied in . From Tables 1 and 2 we see that the algorithm works better for smaller , so the size of is the main factor affecting the performance of the algorithm. This is mainly because the amount of branching in the subproblems is proportional to . The running time also increases as increases, but not as sharply as with .

6. Conclusion

We have presented and validated a branch and bound algorithm for globally solving the sum of ratios problem , in which each term of the objective function is a ratio of two functions, each of which is a sum of absolute values of affine functions with coefficients. The algorithm computes upper bounds by solving convex programming problems, which are derived by using the concave envelope of the objective function. The convergence of the algorithm is proved, and computational results for several test problems have been reported to show the feasibility and efficiency of the proposed algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.