Scientific Programming
Volume 2016, Article ID 3204368, 9 pages
http://dx.doi.org/10.1155/2016/3204368
Research Article

Global Optimization for Solving Linear Multiplicative Programming Based on a New Linearization Method

Chun-Feng Wang1,2 and Yan-Qin Bai1

1Department of Mathematics, College of Sciences, Shanghai University, Shanghai 200444, China
2Department of Mathematics, Henan Normal University, Xinxiang 453007, China

Received 22 April 2016; Revised 22 July 2016; Accepted 31 July 2016

Academic Editor: Fabrizio Riguzzi

Copyright © 2016 Chun-Feng Wang and Yan-Qin Bai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a new global optimization algorithm for solving a class of linear multiplicative programming (LMP) problems. First, a new linear relaxation technique is proposed. Then, to improve the convergence speed of our algorithm, two pruning techniques are presented. Finally, a branch and bound algorithm is developed for solving the LMP problem. The convergence of this algorithm is proved, and some experiments are reported to illustrate the feasibility and efficiency of this algorithm.

1. Introduction

Consider the following linear multiplicative programming (LMP) problem:

$$\text{LMP:}\quad \min\ f(x)=\sum_{i=1}^{p}\bigl(c_i^{T}x+d_i\bigr)\bigl(e_i^{T}x+f_i\bigr)\quad \text{s.t.}\ x\in D=\{x\in\mathbb{R}^{n}:Ax\le b\},$$

where $c_i, e_i\in\mathbb{R}^{n}$, $d_i, f_i\in\mathbb{R}$, $i=1,\dots,p$, $A$ is an $m\times n$ matrix, $b\in\mathbb{R}^{m}$ is a vector, and $D$ is nonempty and bounded.
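To fix notation for the illustrations that follow, the sketch below bundles one LMP instance into a small Python container (our choice of language; the paper's own experiments use Matlab). The class name `LMP` and all field names are ours, not the authors'.

```python
import numpy as np

class LMP:
    """One LMP instance: f(x) = sum_i (c_i^T x + d_i)(e_i^T x + f_i), s.t. A x <= b.
    Shapes: c, e are (p, n); d, f are (p,); A is (m, n); b is (m,)."""
    def __init__(self, c, d, e, f, A, b):
        self.c, self.d = np.asarray(c, float), np.asarray(d, float)
        self.e, self.f = np.asarray(e, float), np.asarray(f, float)
        self.A, self.b = np.asarray(A, float), np.asarray(b, float)

    def objective(self, x):
        # Sum of the p products of affine terms, evaluated at the point x.
        return float(np.sum((self.c @ x + self.d) * (self.e @ x + self.f)))
```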

As a special case of nonconvex programming, the LMP problem has received increasing attention since the 1990s, for two reasons. First, from a practical point of view, the LMP problem appears in a wide variety of applications, such as financial optimization [1], data mining/pattern recognition [2], plant layout design [3], VLSI chip design [4], and robust optimization [5]. Second, from a research point of view, LMP is NP-hard; that is, it usually possesses multiple local optimal solutions that are not globally optimal. It is therefore hard to find its global optimal solution, and good methods need to be put forward.

In the past few decades, under the assumption that $c_i^{T}x+d_i\ge 0$ and $e_i^{T}x+f_i\ge 0$ for all $x\in D$, a number of practical algorithms have been proposed for globally solving problem LMP. These methods can be classified into parameterization-based methods [6, 7], branch-and-bound methods [8-10], decomposition methods [11], cutting-plane methods [12], and so on.

The purpose of this paper is to present an effective method for globally solving problem LMP. Compared with other algorithms, the main features of this algorithm are as follows: (1) by exploiting the special structure of LMP, a new linear relaxation technique is presented, which is used to construct the linear relaxation programming (LRP) problem; (2) two pruning techniques are presented, which can be used to improve the convergence speed of the proposed algorithm; (3) the problem investigated in this paper has a more general form than those in [6-12], since it does not require $c_i^{T}x+d_i\ge 0$ and $e_i^{T}x+f_i\ge 0$; (4) numerical results and comparisons with the methods of [8, 13-22] show that our algorithm works as well as or better than those methods.

This paper is organized as follows. In Section 2, the new linear relaxation programming (LRP) problem for the LMP problem is proposed, which provides a lower bound for the optimal value of LMP. To improve the convergence speed of our algorithm, two pruning techniques are presented in Section 3. In Section 4, the global optimization algorithm is given and its convergence is proved. Numerical experiments are carried out in Section 5 to show the feasibility and efficiency of our algorithm.

2. Linear Relaxation Programming (LRP)

To solve problem LMP, the principal task is the construction of lower bounds for this problem and its partitioned subproblems. A lower bound for the LMP problem and for its partitioned subproblems can be obtained by solving a linear relaxation programming problem. To generate this linear relaxation, the strategy proposed in this paper is to underestimate the objective function $f(x)$ with a linear function. The details of this procedure are given below.

First, we solve the $2n$ linear programming problems
$$\underline{x}_j^{0}=\min_{x\in D}x_j,\qquad \overline{x}_j^{0}=\max_{x\in D}x_j,\qquad j=1,\dots,n,$$
and construct the rectangle $X^{0}=\{x\in\mathbb{R}^{n}:\underline{x}_j^{0}\le x_j\le \overline{x}_j^{0},\ j=1,\dots,n\}$. Then, the LMP problem can be rewritten in the following form:
$$\text{LMP}(X^{0}):\quad \min\ f(x)\quad \text{s.t.}\ Ax\le b,\ x\in X^{0}.$$
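These $2n$ bound-computing LPs can be handed to any off-the-shelf LP solver. A minimal sketch with `scipy.optimize.linprog` (our choice; the paper uses the simplex method in Matlab), assuming $D$ is nonempty and bounded as stated:

```python
from scipy.optimize import linprog
import numpy as np

def initial_rectangle(A, b):
    """Solve min/max x_j over D = {x : A x <= b} for each j to build X^0."""
    n = A.shape[1]
    lo, hi = np.empty(n), np.empty(n)
    for j in range(n):
        cj = np.zeros(n)
        cj[j] = 1.0
        # Variables are free; D is assumed nonempty and bounded, so both LPs solve.
        lo[j] = linprog(cj, A_ub=A, b_ub=b, bounds=(None, None), method="highs").fun
        hi[j] = -linprog(-cj, A_ub=A, b_ub=b, bounds=(None, None), method="highs").fun
    return lo, hi
```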

Let $X=[\underline{x},\overline{x}]\subseteq X^{0}$ be the initial rectangle $X^{0}$ or some subrectangle of $X^{0}$ generated by the proposed algorithm. Next, we show how to construct the linear relaxation programming problem LRP for LMP over $X$.

Towards this end, for each $i=1,\dots,p$, write $y_i(x)=c_i^{T}x+d_i$ and $z_i(x)=e_i^{T}x+f_i$, compute
$$\underline{y}_i=\sum_{j=1}^{n}\min\{c_{ij}\underline{x}_j,\ c_{ij}\overline{x}_j\}+d_i,\qquad \overline{y}_i=\sum_{j=1}^{n}\max\{c_{ij}\underline{x}_j,\ c_{ij}\overline{x}_j\}+d_i,$$
$$\underline{z}_i=\sum_{j=1}^{n}\min\{e_{ij}\underline{x}_j,\ e_{ij}\overline{x}_j\}+f_i,\qquad \overline{z}_i=\sum_{j=1}^{n}\max\{e_{ij}\underline{x}_j,\ e_{ij}\overline{x}_j\}+f_i,$$
and consider the product term $y_i(x)z_i(x)$ in $f(x)$.

Since $\underline{y}_i\le y_i(x)\le \overline{y}_i$ and $\underline{z}_i\le z_i(x)\le \overline{z}_i$ for all $x\in X$, we have $\bigl(y_i(x)-\underline{y}_i\bigr)\bigl(z_i(x)-\underline{z}_i\bigr)\ge 0$; that is, $y_i(x)z_i(x)\ge \underline{z}_i\,y_i(x)+\underline{y}_i\,z_i(x)-\underline{y}_i\underline{z}_i$. Furthermore, summing over $i$, we have
$$f(x)\ \ge\ f^{L}(x):=\sum_{i=1}^{p}\bigl[\underline{z}_i\,y_i(x)+\underline{y}_i\,z_i(x)-\underline{y}_i\underline{z}_i\bigr].\tag{6}$$

In addition, since $y_i(x)-\underline{y}_i\ge 0$ and $z_i(x)-\overline{z}_i\le 0$, we have $\bigl(y_i(x)-\underline{y}_i\bigr)\bigl(z_i(x)-\overline{z}_i\bigr)\le 0$; that is, $y_i(x)z_i(x)\le \overline{z}_i\,y_i(x)+\underline{y}_i\,z_i(x)-\underline{y}_i\overline{z}_i$. Furthermore, we can obtain
$$f(x)\ \le\ f^{U}(x):=\sum_{i=1}^{p}\bigl[\overline{z}_i\,y_i(x)+\underline{y}_i\,z_i(x)-\underline{y}_i\overline{z}_i\bigr].\tag{8}$$

From (6) and (8), we have the following relations for all $x\in X$:
$$f^{L}(x)\ \le\ f(x)\ \le\ f^{U}(x).$$

Based on the above discussion, the linear relaxation programming (LRP) problem can be established as follows, and its optimal value provides a lower bound for the optimal value of the LMP problem over $X$:
$$\text{LRP}(X):\quad \min\ f^{L}(x)\quad \text{s.t.}\ Ax\le b,\ x\in X.$$
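Under the reconstruction above, LRP$(X)$ is an ordinary LP whose objective coefficients come from interval bounds of the affine terms. The sketch below (function names are ours; `p` is an `LMP` instance from the earlier sketch) builds and solves it:

```python
def affine_bounds(C, d, lo, hi):
    """Componentwise interval bounds of C @ x + d over the box [lo, hi]."""
    pos, neg = np.maximum(C, 0.0), np.minimum(C, 0.0)
    return pos @ lo + neg @ hi + d, pos @ hi + neg @ lo + d

def solve_lrp(p, lo, hi):
    """Solve LRP(X): minimize f^L(x) subject to A x <= b and x in X = [lo, hi]."""
    y_lo, _ = affine_bounds(p.c, p.d, lo, hi)
    z_lo, _ = affine_bounds(p.e, p.f, lo, hi)
    # f^L(x) = sum_i [ z_lo_i * y_i(x) + y_lo_i * z_i(x) - y_lo_i * z_lo_i ]
    alpha = p.c.T @ z_lo + p.e.T @ y_lo            # linear coefficients of f^L
    beta = z_lo @ p.d + y_lo @ p.f - y_lo @ z_lo   # constant part of f^L
    res = linprog(alpha, A_ub=p.A, b_ub=p.b,
                  bounds=list(zip(lo, hi)), method="highs")
    return (res.x, beta + res.fun) if res.status == 0 else (None, np.inf)
```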

Theorem 1. For all $x\in X=[\underline{x},\overline{x}]$, let $\Delta x=\overline{x}-\underline{x}$ and consider the functions $f(x)$, $f^{L}(x)$, and $f^{U}(x)$. Then, one has $f^{L}(x)\le f(x)\le f^{U}(x)$, and $f(x)-f^{L}(x)\to 0$, $f^{U}(x)-f(x)\to 0$ as $\|\Delta x\|\to 0$.

Proof. We first prove $f(x)-f^{L}(x)\to 0$. By the definitions of $f$ and $f^{L}$, we have
$$f(x)-f^{L}(x)=\sum_{i=1}^{p}\bigl(y_i(x)-\underline{y}_i\bigr)\bigl(z_i(x)-\underline{z}_i\bigr)\ \le\ \sum_{i=1}^{p}\bigl(\overline{y}_i-\underline{y}_i\bigr)\bigl(\overline{z}_i-\underline{z}_i\bigr).\tag{12}$$
Since $D$ is nonempty and bounded, all of the bounds $\underline{y}_i,\overline{y}_i,\underline{z}_i,\overline{z}_i$ are finite. By the definitions of $\underline{y}_i$ and $\overline{y}_i$, we know that $\overline{y}_i-\underline{y}_i=\sum_{j=1}^{n}|c_{ij}|\,(\overline{x}_j-\underline{x}_j)\to 0$ as $\|\Delta x\|\to 0$, and similarly $\overline{z}_i-\underline{z}_i\to 0$. Combining (12), we have $f(x)-f^{L}(x)\to 0$.
Similarly, we can prove $f^{U}(x)-f(x)\to 0$, and the proof is complete.

Theorem 1 implies that $f^{L}(x)$ and $f^{U}(x)$ will approximate the function $f(x)$ as $\|\Delta x\|\to 0$.
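The sandwich property of Theorem 1 can be spot-checked numerically (an illustration, not a proof). This sketch reuses `affine_bounds` from above and samples random points of the box:

```python
def sandwich_check(p, lo, hi, trials=1000, seed=1):
    """Check f^L(x) <= f(x) <= f^U(x) at random points of the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    y_lo, _ = affine_bounds(p.c, p.d, lo, hi)
    z_lo, z_hi = affine_bounds(p.e, p.f, lo, hi)
    for _ in range(trials):
        x = rng.uniform(lo, hi)
        y, z = p.c @ x + p.d, p.e @ x + p.f
        fL = float(np.sum(z_lo * y + y_lo * z - y_lo * z_lo))
        fU = float(np.sum(z_hi * y + y_lo * z - y_lo * z_hi))
        assert fL - 1e-7 <= p.objective(x) <= fU + 1e-7
```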

3. Pruning Technique

To improve the convergence speed of the algorithm, we present two pruning techniques, which can be used to eliminate regions in which no globally optimal solution of the LMP problem exists.

Assume that $UB$ and $LB$ are the currently known upper bound and lower bound on the optimal value $v$ of the problem LMP. For a subrectangle $X=[\underline{x},\overline{x}]\subseteq X^{0}$, write the estimators of Section 2 as $f^{L}(x)=\sum_{j=1}^{n}\alpha_j x_j+\beta$ and $f^{U}(x)=\sum_{j=1}^{n}\gamma_j x_j+\delta$, and let
$$\lambda=\sum_{j=1}^{n}\min\{\alpha_j\underline{x}_j,\ \alpha_j\overline{x}_j\}+\beta,\qquad \mu=\sum_{j=1}^{n}\max\{\gamma_j\underline{x}_j,\ \gamma_j\overline{x}_j\}+\delta,$$
so that $\lambda\le f^{L}(x)$ and $f^{U}(x)\le\mu$ for all $x\in X$.

The pruning techniques are derived in the following theorems.

Theorem 2. For any subrectangle $X=[\underline{x},\overline{x}]\subseteq X^{0}$ with $\lambda\le UB$, if there exists some index $k\in\{1,\dots,n\}$ such that $\alpha_k>0$ and $\rho_k<\overline{x}_k$, then there is no globally optimal solution of the LMP problem over $X^{1}$; if $\alpha_k<0$ and $\eta_k>\underline{x}_k$ for some $k$, then there is no globally optimal solution of the LMP problem over $X^{2}$, where
$$\rho_k=\underline{x}_k+\frac{UB-\lambda}{\alpha_k},\qquad \eta_k=\overline{x}_k+\frac{UB-\lambda}{\alpha_k},$$
$$X^{1}=\{x\in X:\ x_k>\rho_k\},\qquad X^{2}=\{x\in X:\ x_k<\eta_k\}.$$

Proof. For all $x\in X^{1}$, we first show that $f(x)>UB$. Consider the $k$th component of $x$. Since $x_k>\rho_k$ and $\alpha_k>0$, we can obtain that
$$\alpha_k\,(x_k-\underline{x}_k)>\alpha_k\,(\rho_k-\underline{x}_k)=UB-\lambda.$$
From (6), we have $f(x)\ge f^{L}(x)=\sum_{j=1}^{n}\alpha_j x_j+\beta$. For all $x\in X^{1}$, by the above inequality and the definition of $\lambda$, it implies that
$$f(x)\ \ge\ \lambda+\alpha_k\,(x_k-\underline{x}_k)\ >\ \lambda+(UB-\lambda)\ =\ UB;$$
that is, for all $x\in X^{1}$, $f(x)$ is always greater than $UB$, hence greater than the optimal value of the problem LMP. Therefore, there cannot exist a globally optimal solution of the LMP problem over $X^{1}$.
Similarly, for all $x\in X^{2}$, if there exists some $k$ such that $\alpha_k<0$ and $\eta_k>\underline{x}_k$, it can be derived that there is no globally optimal solution of the LMP problem over $X^{2}$.
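Under the reconstruction above, Theorem 2 becomes a simple box-shrinking routine: given the coefficients $(\alpha,\beta)$ of $f^{L}$ over $X$ and the incumbent $UB$, each coordinate interval is clipped where $f^{L}$ provably exceeds $UB$. A sketch (our names; `None` signals that the whole box can be discarded):

```python
def prune_with_lower_bound(alpha, beta, lo, hi, UB):
    """Theorem 2-style pruning: remove the part of [lo, hi] where f^L(x) > UB."""
    lo, hi = lo.copy(), hi.copy()
    lam = np.minimum(alpha * lo, alpha * hi).sum() + beta  # lambda = min f^L over box
    if lam > UB:
        return None                    # no global solution anywhere in the box
    for k in range(len(lo)):
        if alpha[k] > 0:
            hi[k] = min(hi[k], lo[k] + (UB - lam) / alpha[k])  # clip x_k > rho_k
        elif alpha[k] < 0:
            lo[k] = max(lo[k], hi[k] + (UB - lam) / alpha[k])  # clip x_k < eta_k
    return lo, hi
```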

Theorem 3. For any subrectangle $X=[\underline{x},\overline{x}]\subseteq X^{0}$ with $\mu\ge LB$, if there exists some index $k\in\{1,\dots,n\}$ such that $\gamma_k>0$ and $\nu_k>\underline{x}_k$, then there is no globally optimal solution of the LMP problem over $X^{3}$; if $\gamma_k<0$ and $\omega_k<\overline{x}_k$ for some $k$, then there is no globally optimal solution of the LMP problem over $X^{4}$, where
$$\nu_k=\overline{x}_k+\frac{LB-\mu}{\gamma_k},\qquad \omega_k=\underline{x}_k+\frac{LB-\mu}{\gamma_k},$$
$$X^{3}=\{x\in X:\ x_k<\nu_k\},\qquad X^{4}=\{x\in X:\ x_k>\omega_k\}.$$

Proof. First, we show that, for all $x\in X^{3}$, $f^{U}(x)<LB$. Consider the $k$th component of $x$. By the assumption $\gamma_k>0$ and the definitions of $\mu$ and $f^{U}$, we have
$$f^{U}(x)=\sum_{j=1}^{n}\gamma_j x_j+\delta\ \le\ \mu+\gamma_k\,(x_k-\overline{x}_k).$$
Note that since $x_k<\nu_k$, we have $\gamma_k\,(x_k-\overline{x}_k)<\gamma_k\,(\nu_k-\overline{x}_k)=LB-\mu$. For all $x\in X^{3}$, by the above inequality and the definition of $\nu_k$, it implies that $f^{U}(x)<\mu+(LB-\mu)=LB$. Thus, for all $x\in X^{3}$, we have $f(x)\le f^{U}(x)<LB\le v$. Therefore, there cannot exist a globally optimal solution of the LMP problem over $X^{3}$.
For all $x\in X^{4}$, if there exists some $k$ such that $\gamma_k<0$ and $\omega_k<\overline{x}_k$, from arguments similar to the above, it can be derived that there is no globally optimal solution of the LMP problem over $X^{4}$.
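The mirror-image routine for Theorem 3 clips coordinates where $f^{U}(x)<LB$, since $f(x)\le f^{U}(x)<LB\le v$ rules out a global minimizer there. In the notation of the earlier sketches, the coefficients of $f^{U}$ are `gamma = p.c.T @ z_hi + p.e.T @ y_lo` and `delta = z_hi @ p.d + y_lo @ p.f - y_lo @ z_hi`; the routine itself is our sketch:

```python
def prune_with_upper_bound(gamma, delta, lo, hi, LB):
    """Theorem 3-style pruning: remove the part of [lo, hi] where f^U(x) < LB."""
    lo, hi = lo.copy(), hi.copy()
    mu = np.maximum(gamma * lo, gamma * hi).sum() + delta  # mu = max f^U over box
    if mu < LB:
        return None                    # f <= f^U < LB everywhere: discard the box
    for k in range(len(lo)):
        if gamma[k] > 0:
            lo[k] = max(lo[k], hi[k] + (LB - mu) / gamma[k])  # clip x_k < nu_k
        elif gamma[k] < 0:
            hi[k] = min(hi[k], lo[k] + (LB - mu) / gamma[k])  # clip x_k > omega_k
    return lo, hi
```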

4. Algorithm and Its Convergence

Based on the previous results, this section presents the branch and bound algorithm and gives its convergence.

4.1. Branching Rule

In a branch and bound algorithm, the branching rule is a critical element in guaranteeing convergence. This paper chooses a simple and standard bisection rule, which is sufficient to ensure convergence since it drives the intervals to shrink to singletons for all the variables along any infinite branch of the branch and bound tree.

Consider any node subproblem identified by the rectangle $X=[\underline{x},\overline{x}]\subseteq X^{0}$. The branching rule is as follows: (i) let $k\in\arg\max\{\overline{x}_j-\underline{x}_j:\ j=1,\dots,n\}$; (ii) let $x_k^{m}=(\underline{x}_k+\overline{x}_k)/2$; (iii) let
$$X^{a}=\{x\in X:\ \underline{x}_k\le x_k\le x_k^{m}\},\qquad X^{b}=\{x\in X:\ x_k^{m}\le x_k\le \overline{x}_k\}.$$

By using this branching rule, the rectangle $X$ is partitioned into the two subrectangles $X^{a}$ and $X^{b}$.
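The bisection rule is straightforward to implement; a minimal sketch:

```python
def bisect(lo, hi):
    """Split the box [lo, hi] in half along its longest edge."""
    k = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[k] + hi[k])
    hi_a, lo_b = hi.copy(), lo.copy()
    hi_a[k] = mid   # X^a: upper bound of coordinate k moved to the midpoint
    lo_b[k] = mid   # X^b: lower bound of coordinate k moved to the midpoint
    return (lo, hi_a), (lo_b, hi)
```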

4.2. Branch and Bound Algorithm

From the above discussion, the branch and bound algorithm for globally solving the LMP problem is summarized as follows.

Let $LB(X)$ denote the optimal value of LRP over the subrectangle $X$, and let $x(X)$ be an element of the corresponding argmin.

Algorithm Statement

Step 1. Choose the tolerance $\epsilon\ge 0$. Find an optimal solution $x^{0}=x(X^{0})$ and the optimal value $LB(X^{0})$ for problem LRP with $X=X^{0}$. Set $LB_{0}=LB(X^{0})$ and $UB_{0}=f(x^{0})$. If $UB_{0}-LB_{0}\le\epsilon$, then stop: $x^{0}$ is an $\epsilon$-optimal solution of problem LMP. Otherwise, set $Q_{0}=\{X^{0}\}$, $k=1$, and go to Step 2.

Step 2. Set $UB_{k}=UB_{k-1}$. Subdivide $X^{k-1}$ into two subrectangles via the branching rule, and denote the set of new partition rectangles as $\overline{X}^{k}$.

Step 3. For each new rectangle $X\in\overline{X}^{k}$, utilize the pruning techniques of Theorems 2 and 3 to prune the rectangle $X$. For each $X\in\overline{X}^{k}$, if $\lambda>UB_{k}$ over the rectangle $X$, then remove the rectangle $X$ from $\overline{X}^{k}$; that is, $\overline{X}^{k}=\overline{X}^{k}\setminus\{X\}$.

Step 4. If $\overline{X}^{k}\neq\emptyset$, solve LRP to obtain $LB(X)$ and $x(X)$ for each $X\in\overline{X}^{k}$. If $LB(X)>UB_{k}$, set $\overline{X}^{k}=\overline{X}^{k}\setminus\{X\}$. Otherwise, let $UB_{k}=\min\{UB_{k},\ f(x(X))\}$. If $UB_{k}=f(x(X))$, set $x^{k}=x(X)$.

Step 5. Set $Q_{k}=\bigl(Q_{k-1}\setminus\{X^{k-1}\}\bigr)\cup\overline{X}^{k}$.

Step 6. Set $LB_{k}=\min\{LB(X):\ X\in Q_{k}\}$. Let $X^{k}$ be the subrectangle satisfying $LB_{k}=LB(X^{k})$. If $UB_{k}-LB_{k}\le\epsilon$, then stop: $x^{k}$ is a global $\epsilon$-optimal solution of problem LMP. Otherwise, set $k=k+1$, and go to Step 2.
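Putting the pieces together, the following compact sketch assembles Steps 1-6 from the helper functions above (best-bound node selection, bisection, bound, fathom). It is our illustrative assembly, not the authors' Matlab code, and for brevity it omits the Theorem 2/3 box-shrinking, which would be applied to each child box before bounding:

```python
def branch_and_bound(p, eps=1e-6, max_iter=1000):
    """Branch-and-bound sketch for LMP, assuming the helpers defined earlier."""
    lo, hi = initial_rectangle(p.A, p.b)
    x, lb = solve_lrp(p, lo, hi)
    if x is None:
        return None, np.inf                         # LMP is infeasible
    UB, best = p.objective(x), x                    # Step 1: initial incumbent
    active = [(lb, lo, hi)]                         # live nodes with their bounds
    for _ in range(max_iter):
        if not active:
            break
        j = min(range(len(active)), key=lambda j: active[j][0])
        lb_k, lo_k, hi_k = active.pop(j)            # Step 6: smallest lower bound
        if UB - lb_k <= eps:
            break                                   # incumbent is eps-optimal
        for lo_c, hi_c in bisect(lo_k, hi_k):       # Step 2: branch
            x_c, lb_c = solve_lrp(p, lo_c, hi_c)    # Step 4: bound via LRP
            if x_c is None or lb_c > UB - eps:
                continue                            # Step 3: fathom the child
            val = p.objective(x_c)                  # x_c is LMP-feasible
            if val < UB:
                UB, best = val, x_c                 # update the incumbent
            active.append((lb_c, lo_c, hi_c))
    return best, UB
```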

4.3. Convergence Analysis

In this subsection, the convergence properties of the algorithm are given.

Theorem 4. The algorithm either terminates finitely with a globally $\epsilon$-optimal solution or generates an infinite sequence $\{x^{k}\}$, any accumulation point of which is a globally optimal solution of LMP.

Proof. If the algorithm terminates finitely, without loss of generality, assume that the algorithm terminates at the $k$th step. By the algorithm, we have $UB_{k}-LB_{k}\le\epsilon$, that is, $f(x^{k})-LB_{k}\le\epsilon$. Since $LB_{k}\le v$, where $v$ denotes the optimal value of LMP, it follows that $f(x^{k})\le v+\epsilon$. So, $x^{k}$ is a global $\epsilon$-optimal solution of the problem LMP.
If the algorithm is infinite, then an infinite sequence $\{x^{k}\}$ will be generated. Since the feasible region of LMP is bounded, the sequence must have a convergent subsequence. Without loss of generality, set $\lim_{k\to\infty}x^{k}=x^{\ast}$. By the algorithm, we have $LB_{k}=f^{L}(x^{k})\le v$. Since $x^{k}$ is a feasible solution of problem LMP, $f(x^{k})\ge v$. Taken together, the following relation holds:
$$f^{L}(x^{k})\ \le\ v\ \le\ f(x^{k}).$$
On the other hand, by the algorithm and the continuity of $f$, we have $\lim_{k\to\infty}f(x^{k})=f(x^{\ast})$. By Theorem 1, we can obtain $\lim_{k\to\infty}\bigl(f(x^{k})-f^{L}(x^{k})\bigr)=0$. Therefore, $f(x^{\ast})=v$; that is, $x^{\ast}$ is a globally optimal solution of problem LMP.

5. Numerical Experiments

To verify the performance of the proposed algorithm, some numerical experiments are carried out and compared with some other methods [8, 13-22]. The algorithm is implemented in Matlab 7.1 on a Pentium IV (3.06 GHz) microcomputer. The simplex method is applied to solve the linear relaxation programming problems. In our experiments, the convergence tolerance $\epsilon$ is set to one fixed value for Examples 1-10 and to another for Example 11.

The results for problems 1-10 are summarized in Table 1, where the following notation is used in the row headers: Iter is the number of algorithm iterations; Time (s) is the execution time in seconds. Except for the results of our algorithm, the results of the other eleven algorithms are taken directly from the corresponding references. In Table 1, "—" denotes that the corresponding value is not available.

Table 1: Comparison results for Examples 1-10.

For problems 1-10, the efficiency of the algorithm proposed in this paper (named Algorithm 1) is compared with that of the same algorithm without the pruning techniques (named Algorithm 2). The comparison results are given in Table 2.

Table 2: Comparison results of Algorithms 1 and 2 for Examples 1-10.

Example 1 (see [1315]).

Example 2 (see [13, 16]).

Example 3 (see [13, 17]).

Example 4 (see [8, 18]).

Example 5 (see [19]).

Example 6 (see [19]).

Example 7 (see [20]).

Example 8 (see [21]).

Example 9 (see [22]).

Example 10 (see [22]).

To further verify the effectiveness of Algorithm 1, a random problem with variable scale is constructed, which is defined as follows.

Example 11. A randomly generated LMP instance in which the real elements of the problem data are pseudorandomly generated in fixed ranges. For Example 11, Algorithms 1 and 2 are used to solve 10 different random instances for each size, and statistics of the results are presented. The computational results are summarized in Table 3, where the following notation is used in the row headers: Avg.Iter is the average number of iterations; Avg.Time is the average execution time in seconds; $m$ is the number of constraints; $n$ is the number of variables.
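For experimentation, a random generator in the spirit of Example 11 can be set up as follows. Since this copy of the paper does not preserve the authors' sampling ranges, the intervals below are our assumptions, chosen only so that the feasible region $D$ is nonempty and bounded:

```python
def random_lmp(m, n, p_terms, seed=0):
    """Random LMP instance; all sampling ranges below are OUR assumptions."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(0.0, 1.0, (p_terms, n))   # assumed range [0, 1]
    e = rng.uniform(0.0, 1.0, (p_terms, n))
    d = rng.uniform(0.0, 1.0, p_terms)
    f = rng.uniform(0.0, 1.0, p_terms)
    A = rng.uniform(0.0, 1.0, (m, n))
    b = rng.uniform(0.5 * n, 1.0 * n, m)      # keeps {A x <= b, x >= 0} nonempty
    A_full = np.vstack([A, -np.eye(n)])       # append x >= 0 so that D is bounded
    b_full = np.concatenate([b, np.zeros(n)])
    return LMP(c, d, e, f, A_full, b_full)

# Example usage: best, val = branch_and_bound(random_lmp(10, 10, 3))
```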

Table 3: Comparison results of Algorithms 1 and 2 for Example 11.

From Table 1, it can be seen that our algorithm determines the global optimal solution more efficiently than the methods of [8, 13-22] in most cases. For Examples 8 and 9, although the number of iterations of our algorithm is larger than those reported in [21, 22], the optimal values and optimal solutions obtained by our algorithm are better than theirs.

From Table 2, it can be seen that, for Examples 1-6, both Algorithms 1 and 2 need only one iteration to find the optimal solution, so the advantage of Algorithm 1 is not visible there. However, for Examples 7-10, the performance of Algorithm 1 is better than that of Algorithm 2.

From Table 3, we can see that, for small-scale problems, the advantage of Algorithm 1 over Algorithm 2 is modest, but as the scale of the problem grows, the advantage of Algorithm 1 becomes more and more pronounced. For example, at one medium problem size the average running times of Algorithms 1 and 2 are 17.0344 and 29.5778 seconds, respectively, and the average iteration counts are 17.6 and 30.4; at a larger size, the average running times of Algorithms 1 and 2 are 42.8972 and 402.0530 seconds, respectively, and the average iteration counts are 56.8 and 215.3. It is clear that the efficiency of Algorithm 1 is much better than that of Algorithm 2 for large-scale problems. In addition, from Table 3 we can also see that, compared with the number of constraints and the number of variables, the number of product terms has an even greater impact on our algorithm, while the Avg.Time and Avg.Iter of Algorithm 1 do not increase significantly with the increase of the problem size.

The comparison results in Tables 2 and 3 show that the pruning techniques are very effective in improving the convergence speed of our algorithm.

The test results show that our algorithm is both feasible and efficient.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research was supported by the NSFC (U1404105); the Key Scientific and Technological Project of Henan Province (142102210058); the Doctoral Scientific Research Foundation of Henan Normal University (qd12103); the Youth Science Foundation of Henan Normal University (2013qk02); the Henan Normal University National Research Project to Cultivate the Funded Projects (01016400105); and the Henan Normal University Youth Backbone Teacher Training Program.

References

  1. C. D. Maranas, I. P. Androulakis, C. A. Floudas, A. J. Berger, and J. M. Mulvey, "Solving long-term financial planning problems via global optimization," Journal of Economic Dynamics and Control, vol. 21, no. 8-9, pp. 1405–1425, 1997.
  2. K. P. Bennett and O. L. Mangasarian, "Bilinear separation of two sets in n-space," Computational Optimization and Applications, vol. 2, no. 3, pp. 207–227, 1993.
  3. I. Quesada and I. E. Grossmann, "Alternative bounding approximations for the global optimization of various engineering design problems," in Global Optimization in Engineering Design, I. E. Grossmann, Ed., vol. 9 of Nonconvex Optimization and Its Applications, pp. 309–331, Kluwer Academic Publishers, Norwell, Mass, USA, 1996.
  4. M. C. Dorneich and N. V. Sahinidis, "Global optimization algorithms for chip layout and compaction," Engineering Optimization, vol. 25, no. 2, pp. 131–154, 1995.
  5. J. M. Mulvey, R. J. Vanderbei, and S. A. Zenios, "Robust optimization of large-scale systems," Operations Research, vol. 43, no. 2, pp. 264–281, 1995.
  6. H. Konno, T. Kuno, and Y. Yajima, "Global minimization of a generalized convex multiplicative function," Journal of Global Optimization, vol. 4, no. 1, pp. 47–62, 1994.
  7. T. Kuno, "Solving a class of multiplicative programs with 0–1 knapsack constraints," Journal of Optimization Theory and Applications, vol. 103, no. 1, pp. 121–135, 1999.
  8. H.-S. Ryoo and N. V. Sahinidis, "Global optimization of multiplicative programs," Journal of Global Optimization, vol. 26, no. 4, pp. 387–418, 2003.
  9. P. Shen and H. Jiao, "Linearization method for a class of multiplicative programming with exponent," Applied Mathematics and Computation, vol. 183, no. 1, pp. 328–336, 2006.
  10. X.-G. Zhou and K. Wu, "A method of acceleration for a class of multiplicative programming problems with exponent," Journal of Computational and Applied Mathematics, vol. 223, no. 2, pp. 975–982, 2009.
  11. R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, Germany, 2nd edition, 1993.
  12. H. P. Benson and G. M. Boger, "Outcome-space cutting-plane algorithm for linear multiplicative programming," Journal of Optimization Theory and Applications, vol. 104, no. 2, pp. 301–322, 2000.
  13. C.-F. Wang, S.-Y. Liu, and P.-P. Shen, "Global minimization of a generalized linear multiplicative programming," Applied Mathematical Modelling, vol. 36, no. 6, pp. 2446–2451, 2012.
  14. Y. L. Gao, C. X. Xu, and Y. T. Yang, "Outcome-space branch and bound algorithm for solving linear multiplicative programming," in Computational Intelligence and Security: International Conference, CIS 2005, Xi'an, China, December 15–19, 2005, Proceedings, Part I, vol. 3801 of Lecture Notes in Computer Science, pp. 675–681, Springer, Berlin, Germany, 2005.
  15. Y. L. Gao, G. R. Wu, and W. M. Ma, "A new global optimization approach for convex multiplicative programming," Applied Mathematics and Computation, vol. 216, no. 4, pp. 1206–1218, 2010.
  16. S. Schaible and C. Sodini, "Finite algorithm for generalized linear multiplicative programming," Journal of Optimization Theory and Applications, vol. 87, no. 2, pp. 441–455, 1995.
  17. N. V. Thoai, "A global optimization approach for solving the convex multiplicative programming problem," Journal of Global Optimization, vol. 1, pp. 341–357, 1991.
  18. J. E. Falk and S. W. Palocsay, "Image space analysis of generalized fractional programs," Journal of Global Optimization, vol. 4, no. 1, pp. 63–88, 1994.
  19. H. Jiao, K. Li, and J. Wang, "An optimization algorithm for solving a class of multiplicative problems," Journal of Chemical and Pharmaceutical Research, vol. 6, no. 1, pp. 271–277, 2014.
  20. X.-G. Zhou, B.-Y. Cao, and K. Wu, "Global optimization method for linear multiplicative programming," Acta Mathematicae Applicatae Sinica, vol. 31, no. 2, pp. 325–334, 2015.
  21. X.-G. Zhou, "Global optimization of linear multiplicative programming using univariate search," in Fuzzy Information & Engineering and Operations Research & Management, B.-Y. Cao and H. Nasseri, Eds., vol. 211 of Advances in Intelligent Systems and Computing, pp. 51–56, 2014.
  22. X.-G. Zhou and B.-Y. Cao, "A simplicial branch and bound duality-bounds algorithm to linear multiplicative programming," Journal of Applied Mathematics, vol. 2013, Article ID 984168, 10 pages, 2013.