Abstract

This paper presents a new global optimization algorithm for solving a class of linear multiplicative programming (LMP) problems. First, a new linear relaxation technique is proposed. Then, to improve the convergence speed of the algorithm, two pruning techniques are presented. Finally, a branch and bound algorithm is developed for solving the LMP problem. The convergence of this algorithm is proved, and numerical experiments are reported to illustrate its feasibility and efficiency.

1. Introduction

Consider the following linear multiplicative programming (LMP) problem:

LMP: $\min f(x) = \sum_{i=1}^{p} (c_i^{T}x + d_i)(e_i^{T}x + f_i)$, s.t. $x \in D = \{x \in \mathbb{R}^{n} : Ax \le b\}$,

where $c_i, e_i \in \mathbb{R}^{n}$ and $d_i, f_i \in \mathbb{R}$ for $i = 1, \ldots, p$, $A \in \mathbb{R}^{m \times n}$ is a matrix, $b \in \mathbb{R}^{m}$ is a vector, and $D$ is nonempty and bounded.

As a special case of nonconvex programming, problem LMP has received increasing attention since the 1990s, for two reasons. First, from a practical point of view, the LMP problem appears in a wide variety of applications, such as financial optimization [1], data mining/pattern recognition [2], plant layout design [3], VLSI chip design [4], and robust optimization [5]. Second, from a research point of view, LMP is NP-hard; it usually possesses multiple local optimal solutions that are not globally optimal, so finding its global optimal solution is difficult, and good methods are needed.

In the past few decades, under the assumption that the affine factors $c_i^{T}x + d_i$ and $e_i^{T}x + f_i$ are positive over the feasible region, a number of practical algorithms have been proposed for globally solving problem LMP. These methods can be classified into parameterization-based methods [6, 7], branch-and-bound methods [8–10], decomposition methods [11], cutting plane methods [12], and so on.

The purpose of this paper is to present an effective method for globally solving problem LMP. Compared with other algorithms, the main features of this algorithm are as follows: (1) by exploiting the special structure of LMP, a new linear relaxation technique is presented, which is used to construct the linear relaxation programming (LRP) problem; (2) two pruning techniques are presented, which improve the convergence speed of the proposed algorithm; (3) the problem investigated in this paper has a more general form than those in [6–12], since it does not require the positivity assumptions $c_i^{T}x + d_i > 0$ and $e_i^{T}x + f_i > 0$; (4) numerical results and comparison with the methods of [8, 13–22] show that our algorithm works as well as or better than those methods.

This paper is organized as follows. In Section 2, the new linear relaxation programming (LRP) problem for LMP problem is proposed, which provides a lower bound for the optimal value of LMP. In order to improve the convergence speed of our algorithm, two pruning techniques are presented in Section 3. In Section 4, the global optimization algorithm is given, and the convergence of the algorithm is proved. Numerical experiments are carried out to show the feasibility and efficiency of our algorithm in Section 5.

2. Linear Relaxation Programming (LRP)

To solve problem LMP, the principal task is to construct lower bounds for this problem and its partitioned subproblems. A lower bound for the LMP problem and its partitioned subproblems can be obtained by solving a linear relaxation programming problem. To generate this linear relaxation, the strategy of this paper is to underestimate the objective function with a linear function. The details of this procedure are given below.

First, we solve the $2p$ linear programming problems $\underline{y}_i^0 = \min_{x \in D} (e_i^{T}x + f_i)$ and $\overline{y}_i^0 = \max_{x \in D} (e_i^{T}x + f_i)$, $i = 1, \ldots, p$, and construct the rectangle $Y^0 = \prod_{i=1}^{p} [\underline{y}_i^0, \overline{y}_i^0]$. Then the LMP problem can be rewritten in the following equivalent form:

LMP($Y^0$): $\min \sum_{i=1}^{p} (c_i^{T}x + d_i)\, y_i$, s.t. $e_i^{T}x + f_i = y_i$, $i = 1, \ldots, p$, $x \in D$, $y \in Y^0$.
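For concreteness, the rectangle construction can be sketched in a few lines of code. The sketch below is not the paper's own implementation; it assumes the problem data are stored as arrays e, f, A, b (illustrative names) and uses scipy's linprog to solve the $2p$ bound problems.

```python
# Minimal sketch of the initial-rectangle construction (illustrative names,
# not the paper's code). Each bound costs one LP over D = {x : Ax <= b}.
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(e, f, A, b):
    """For each i, bound y_i = e_i^T x + f_i over D by solving two LPs."""
    p, n = e.shape
    y_lo, y_hi = np.empty(p), np.empty(p)
    free = [(None, None)] * n   # x unrestricted; D is bounded via Ax <= b
    for i in range(p):
        lo = linprog(c=e[i], A_ub=A, b_ub=b, bounds=free)    # min e_i^T x
        hi = linprog(c=-e[i], A_ub=A, b_ub=b, bounds=free)   # max = -min(-e_i^T x)
        y_lo[i] = lo.fun + f[i]
        y_hi[i] = -hi.fun + f[i]
    return y_lo, y_hi
```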

Let $Y = \prod_{i=1}^{p} [\underline{y}_i, \overline{y}_i]$ denote the initial rectangle $Y^0$ or some subrectangle of $Y^0$ generated by the proposed algorithm. Next, we will show how to construct the linear relaxation programming problem LRP for LMP.

Towards this end, for each $i = 1, \ldots, p$, compute the bounds $\underline{u}_i = \min_{x \in D} (c_i^{T}x + d_i)$ and $\overline{u}_i = \max_{x \in D} (c_i^{T}x + d_i)$, and consider the product term $(c_i^{T}x + d_i)\, y_i$ in the objective function.

Since ,  , we have that is, Furthermore, we have

In addition, since ,  , we have Furthermore, we can obtain

From (6) and (8), we have the following relations:

Based on the above discussion, the linear relaxation programming (LRP) problem can be established as follows, which provides a lower bound for the optimal value of LMP problem over :LRP:
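The relations above combine per-term linear bounds into the objective of LRP. As a generic illustration of how the product of a bounded affine term and a bounded variable is underestimated, a McCormick-type bound can be written as follows; this is a standard construction shown for orientation, and the paper's own relations (6) and (8) may differ in detail.

$$
\begin{aligned}
&\text{For } u = c_i^{T}x + d_i \in [\underline{u}_i, \overline{u}_i] \text{ and } y_i \in [\underline{y}_i, \overline{y}_i]:\\
&(u - \underline{u}_i)(y_i - \underline{y}_i) \ge 0 \;\Longrightarrow\; u\,y_i \ge \underline{y}_i\,u + \underline{u}_i\,y_i - \underline{u}_i\,\underline{y}_i,\\
&(u - \overline{u}_i)(y_i - \overline{y}_i) \ge 0 \;\Longrightarrow\; u\,y_i \ge \overline{y}_i\,u + \overline{u}_i\,y_i - \overline{u}_i\,\overline{y}_i.
\end{aligned}
$$

Summing such underestimators over $i$ yields a linear function that minorizes the objective of LMP over $D \times Y$, which is precisely the role the LRP problem plays.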

Theorem 1. For all , let and consider the functions ,  , and . Then, one has .

Proof. We first prove . By the definitions of and , we have Since $D$ is nonempty and bounded, there exists such that . From the above inequality, we have By the definitions of and , we know that as . Combining this with (12), we have .
Similarly, we can prove , and the proof is complete.

Theorem 1 implies that and will approximate the function as .
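For intuition about this limit, the generic bilinear case admits an explicit error bound; the identity below is a standard fact about McCormick underestimators (stated for orientation, not as the paper's own estimate):

$$
\max_{u \in [\underline{u}, \overline{u}],\; y \in [\underline{y}, \overline{y}]}
\Bigl( u y - \max\bigl\{\underline{y}\,u + \underline{u}\,y - \underline{u}\,\underline{y},\;
\overline{y}\,u + \overline{u}\,y - \overline{u}\,\overline{y}\bigr\} \Bigr)
= \frac{(\overline{u} - \underline{u})(\overline{y} - \underline{y})}{4}.
$$

The gap is attained at the center of the box and vanishes as the rectangle shrinks to a point, which is consistent with the approximation behavior asserted by Theorem 1.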

3. Pruning Technique

To improve the convergence speed of the algorithm, we present two pruning techniques, which can be used to eliminate regions in which no globally optimal solution of the LMP problem exists.

Assume that and are the currently known upper bound and lower bound of the optimal value of problem LMP. Let

The pruning techniques are derived as in the following theorems.

Theorem 2. For any subrectangle with , if there exists some index such that and , then there is no globally optimal solution of the LMP problem over ; if and for some , then there is no globally optimal solution of the LMP problem over , where

Proof. For all , we first show that . Consider the th component of . Since , we can obtain that . From , we have . For all , by the above inequality and the definition of , it follows that ; that is, . Thus, for all , we have ; that is, for all , is always greater than the optimal value of problem LMP. Therefore, there cannot exist a globally optimal solution of the LMP problem over .
Similarly, for all , if there exists some such that and , it can be derived that there is no globally optimal solution of the LMP problem over .

Theorem 3. For any subrectangle with , if there exists some index such that and , then there is no globally optimal solution of the LMP problem over ; if and for some , then there is no globally optimal solution of the LMP problem over , where

Proof. First, we show that, for all , . Consider the th component of . By the assumption and the definitions of and , we have . Note that since , we have . For all , by the above inequality and the definition of , it follows that . Thus, for all , we have . Therefore, there cannot exist a globally optimal solution of the LMP problem over .
For all , if there exists some such that and , then by arguments similar to the above, it can be derived that there is no globally optimal solution of the LMP problem over .
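Operationally, both theorems compare a cheap lower bound over part of a rectangle against the currently best known objective value and discard any part over which the bound already exceeds it. The sketch below shows this mechanism for a lower bounding function that is linear in $y$; the coefficient vector, offset, and thresholds are illustrative stand-ins and do not reproduce the paper's exact formulas.

```python
# Schematic rectangle pruning by bound comparison (illustrative, not the
# paper's exact tests). Assume the relaxation minorant over the box is
# linear in y:  L(y) = alpha^T y + beta,  and UB is the best known value.
import numpy as np

def prune_box(alpha, beta, y_lo, y_hi, UB):
    """Shrink [y_lo, y_hi]; return None if it cannot hold a global optimum."""
    y_lo, y_hi = y_lo.copy(), y_hi.copy()
    # Least possible value of L over the box, taken coordinate by coordinate.
    base = beta + np.sum(np.where(alpha >= 0, alpha * y_lo, alpha * y_hi))
    if base > UB:                # the whole box is dominated: delete it
        return None
    for q in range(len(alpha)):
        if alpha[q] > 0:         # pushing y_q above t forces L(y) > UB
            t = y_lo[q] + (UB - base) / alpha[q]
            y_hi[q] = min(y_hi[q], t)
        elif alpha[q] < 0:       # pushing y_q below t forces L(y) > UB
            t = y_hi[q] + (UB - base) / alpha[q]
            y_lo[q] = max(y_lo[q], t)
    return y_lo, y_hi
```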

4. Algorithm and Its Convergence

Based on the previous results, this section presents the branch and bound algorithm and establishes its convergence.

4.1. Branching Rule

In a branch and bound algorithm, the branching rule is a critical element for guaranteeing convergence. This paper chooses a simple and standard bisection rule, which is sufficient to ensure convergence, since it drives the intervals of all variables to shrink to singletons along any infinite branch of the branch and bound tree.

Consider any node subproblem identified by the rectangle $Y = \prod_{i=1}^{p} [\underline{y}_i, \overline{y}_i] \subseteq Y^0$. The branching rule is as follows: (i) let $q \in \arg\max\{\overline{y}_i - \underline{y}_i : i = 1, \ldots, p\}$; (ii) let $y_q^{m} = (\underline{y}_q + \overline{y}_q)/2$; (iii) let $Y^1 = \{y \in Y : \underline{y}_q \le y_q \le y_q^{m}\}$ and $Y^2 = \{y \in Y : y_q^{m} \le y_q \le \overline{y}_q\}$.

By using this branching rule, the rectangle $Y$ is partitioned into the two subrectangles $Y^1$ and $Y^2$.
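As a sketch (with an illustrative array representation of a rectangle, not the paper's code), the rule reads:

```python
# Bisection along the longest edge of the rectangle [y_lo, y_hi].
import numpy as np

def bisect(y_lo, y_hi):
    """Split the rectangle at the midpoint of its longest edge."""
    q = int(np.argmax(y_hi - y_lo))       # (i) index of the longest edge
    mid = 0.5 * (y_lo[q] + y_hi[q])       # (ii) its midpoint
    lo1, hi1 = y_lo.copy(), y_hi.copy()   # (iii) the two children
    lo2, hi2 = y_lo.copy(), y_hi.copy()
    hi1[q] = mid
    lo2[q] = mid
    return (lo1, hi1), (lo2, hi2)
```

Bisecting the longest edge guarantees that, along any infinite nested sequence of boxes, every edge length tends to zero, which is exactly the property the convergence analysis relies on.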

4.2. Branch and Bound Algorithm

From the above discussion, the branch and bound algorithm for globally solving the LMP problem is summarized as follows.

Let $LB(Y)$ denote the optimal objective value of LRP over the subrectangle $Y$, and let $x(Y)$ be an element of the corresponding argmin.

Algorithm Statement

Step 1. Choose the convergence tolerance $\epsilon$. Find an optimal solution and the optimal value for problem LRP with . Set , and . If , then stop: is an $\epsilon$-optimal solution of problem LMP. Otherwise, set , , and go to Step 2.

Step 2. Set . Subdivide into two subrectangles via the branching rule, and denote the set of new partition rectangles as .

Step 3. For each new rectangle , apply the pruning techniques of Theorems 2 and 3 to prune the rectangle . For , if there exists some such that over the rectangle , then remove the rectangle from ; that is, .

Step 4. If , solve LRP to obtain and for each . If , set . Otherwise, let . If , set .

Step 5. Set

Step 6. Set . Let be the subrectangle that satisfies . If , then stop: is a globally $\epsilon$-optimal solution of problem LMP. Otherwise, set , and go to Step 2.
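Putting Steps 1–6 together, the overall loop can be sketched as follows. The sketch assumes an oracle solve_lrp(box) that returns the LRP optimal value and minimizer over a box, the LMP objective f, and a prune(box, UB) routine in the spirit of Theorems 2 and 3; it reuses the bisect sketch above, and all names are illustrative rather than the paper's own.

```python
def branch_and_bound(box0, solve_lrp, f, prune, eps=1e-6):
    """Schematic of Steps 1-6. A box is a pair (y_lo, y_hi) of arrays."""
    LB, x = solve_lrp(box0)              # Step 1: initial lower bound
    best_x, UB = x, f(x)                 # the x-part of LRP is LMP-feasible
    active = [(LB, box0)]                # live nodes with their bounds
    while active:
        active.sort(key=lambda node: node[0])
        LB, box = active.pop(0)          # node with the least lower bound
        if UB - LB <= eps:               # Step 6: termination test
            break
        for child in bisect(*box):       # Step 2: branching
            child = prune(child, UB)     # Step 3: Theorems 2 and 3
            if child is None:
                continue
            lb, x = solve_lrp(child)     # Step 4: bounding over the child
            if f(x) < UB:                # Step 4: update the incumbent
                best_x, UB = x, f(x)
            if lb < UB - eps:            # Step 5: keep non-dominated nodes
                active.append((lb, child))
    return best_x, UB
```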

4.3. Convergence Analysis

In this subsection, the convergence properties of the algorithm are given.

Theorem 4. The algorithm either terminates finitely with a globally $\epsilon$-optimal solution or generates an infinite sequence , any accumulation point of which is a globally optimal solution of LMP.

Proof. If the algorithm terminates finitely, without loss of generality assume that it terminates at the th step. By the algorithm, we have So is a globally $\epsilon$-optimal solution of problem LMP.
If the algorithm does not terminate finitely, then it generates an infinite sequence . Since the feasible region of LMP is bounded, this sequence must have a convergent subsequence. Without loss of generality, set . By the algorithm, we have Since is a feasible solution of problem LMP, . Taken together, the following relation holds: On the other hand, by the algorithm and the continuity of , we have By Theorem 1, we can obtain Therefore, ; that is, is a globally optimal solution of problem LMP.

5. Numerical Experiments

To verify the performance of the proposed algorithm, some numerical experiments are carried out and compared with some other methods [8, 13–22]. The algorithm is implemented in Matlab 7.1 and run on a Pentium IV (3.06 GHz) microcomputer. The simplex method is applied to solve the linear relaxation programming problems. In our experiments, for Examples 1–10, the convergence tolerance is ; for Example 11, the convergence tolerance is .

The results for problems 1–10 are summarized in Table 1, where the following notation is used in the row headers: Iter is the number of algorithm iterations; Time (s) is the execution time in seconds. Except for the results of our algorithm, the results of the other eleven algorithms are taken directly from the corresponding references. In Table 1, “—” denotes that the corresponding value is not available.

For problems 1–10, we compare the efficiency of the algorithm proposed in this paper (named Algorithm 1) with that of the same algorithm without the pruning techniques (named Algorithm 2). The comparison results are given in Table 2.

Example 1 (see [13–15]).

Example 2 (see [13, 16]).

Example 3 (see [13, 17]).

Example 4 (see [8, 18]).

Example 5 (see [19]).

Example 6 (see [19]).

Example 7 (see [20]).

Example 8 (see [21]).

Example 9 (see [22]).

Example 10 (see [22]). To further verify the effectiveness of Algorithm 1, a randomly generated problem of variable scale is constructed, which is defined as follows.

Example 11. where the real elements of , , , are pseudorandomly generated in the range ; the real elements of and are pseudorandomly generated in the range . For Example 11, Algorithms 1 and 2 are used to solve 10 different random instances for each size, and statistics of the results are presented. The computational results are summarized in Table 3, where the following notation is used in the row headers: Avg.Iter is the average number of iterations; Avg.Time is the average execution time in seconds; is the number of constraints; is the number of variables.
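For readers who wish to reproduce experiments of this kind, an instance generator in the spirit of Example 11 might look as follows. The paper's actual sampling ranges are not reproduced above, so the LOW and HIGH bounds below are placeholders, and nonemptiness and boundedness of the resulting feasible region must still be checked separately.

```python
# Hypothetical random LMP instance generator; LOW/HIGH are placeholder
# sampling bounds, not the ranges used in the paper.
import numpy as np

def random_lmp(m, n, p, seed=0, LOW=0.0, HIGH=1.0):
    rng = np.random.default_rng(seed)
    c = rng.uniform(LOW, HIGH, (p, n)); d = rng.uniform(LOW, HIGH, p)
    e = rng.uniform(LOW, HIGH, (p, n)); f = rng.uniform(LOW, HIGH, p)
    A = rng.uniform(LOW, HIGH, (m, n)); b = rng.uniform(LOW, HIGH, m)
    # Caution: a random (A, b) pair does not by itself guarantee that
    # D = {x : Ax <= b} is nonempty and bounded; verify before use.
    return c, d, e, f, A, b
```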

From Table 1, it can be seen that our algorithm can determine the global optimal solution more effectively than the methods of [8, 13–22] in most cases. For Examples 8 and 9, although our algorithm requires more iterations than the methods of [21, 22], the optimal values and optimal solutions it obtains are better.

From Table 2, it can be seen that, for Examples 1–6, Algorithms 1 and 2 both need only one iteration to find the optimal solution, so the advantage of Algorithm 1 is not reflected. However, for Examples 7–10, the performance of Algorithm 1 is better than that of Algorithm 2.

From Table 3, we can see that, for small-scale problems, Algorithm 1 has little advantage over Algorithm 2, but as the scale of the problem increases, the advantage of Algorithm 1 becomes more and more pronounced. For example, when , the average running times of Algorithms 1 and 2 are 17.0344 and 29.5778 seconds, respectively, and the average iteration counts are 17.6 and 30.4. However, when , the average running times of Algorithms 1 and 2 are 42.8972 and 402.0530 seconds, respectively, and the average iteration counts are 56.8 and 215.3. It is clear that Algorithm 1 is much more efficient than Algorithm 2 for large-scale problems. In addition, from Table 3, we can also see that, compared with and , the impact of on our algorithm is greater, and the Avg.Time and Avg.Iter of Algorithm 1 do not increase significantly with the problem size.

The comparison results in Tables 2 and 3 show that the pruning techniques are effective in improving the convergence speed of our algorithm.

The test results show that our algorithm is both feasible and efficient.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research was supported by NSFC (U1404105); the Key Scientific and Technological Project of Henan Province (142102210058); the Doctoral Scientific Research Foundation of Henan Normal University (qd12103); the Youth Science Foundation of Henan Normal University (2013qk02); Henan Normal University National Research Project to Cultivate the Funded Projects (01016400105); the Henan Normal University Youth Backbone Teacher Training.