Abstract

Applications of generalized linear multiplicative programming problems (LMP) arise frequently in various areas of engineering practice and management science. In this paper, we present a simple global optimization algorithm for solving the linear multiplicative programming problem (LMP). The algorithm is developed by fusing a new convex relaxation method with the branch-and-bound scheme and some accelerating techniques. Global convergence and optimality of the algorithm are established, and extensive computational results are reported on a wide range of problems from recent literature and GLOBALLib. Numerical experiments show that the proposed algorithm with the new convex relaxation method is more efficient than the usual branch-and-bound algorithms based on linear relaxation for solving the LMP.

1. Introduction

This paper deals with finding global optimal solutions for generalized linear multiplicative programming problems of the form
$$(\mathrm{LMP}):\quad \min\ f_0(x)=\sum_{j=1}^{p_0}\alpha_{0j}\bigl(c_{0j}^{T}x+d_{0j}\bigr)\bigl(e_{0j}^{T}x+f_{0j}\bigr)\quad \text{s.t.}\quad f_i(x)=\sum_{j=1}^{p_i}\alpha_{ij}\bigl(c_{ij}^{T}x+d_{ij}\bigr)\bigl(e_{ij}^{T}x+f_{ij}\bigr)\le \beta_i,\ i=1,\dots,m,\quad x\in X=\{x\in\mathbb{R}^{n}:Ax\le b\},$$
where in the objective and constraints the terms $c_{ij}^{T}x+d_{ij}$ and $e_{ij}^{T}x+f_{ij}$ are linear functions with general forms, $\alpha_{ij}$, $d_{ij}$, $f_{ij}$, and $\beta_i$ are all arbitrary real numbers, $c_{ij},e_{ij}\in\mathbb{R}^{n}$, $i=0,1,\dots,m$, $j=1,\dots,p_i$, $A$ is a real matrix, $b$ is a real vector, and the set $X$ is nonempty and bounded.

Generally, the linear multiplicative programming problem (LMP) is a special case of nonconvex programming and is known to be NP-hard. LMP has attracted considerable attention in the literature because of its large number of practical applications in various fields of study, including financial optimization in Konno et al. [1], VLSI chip design in Dorneich and Sahinidis [2], data mining and pattern recognition in Bennett and Mangasarian [3], plant layout design in Quesada and Grossmann [4], marketing and service planning in Samadi et al. [5], robust optimization in Mulvey et al. [6], multiple-objective decision making in Benson [7], Keeney and Raiffa [8], and Geoffrion [9], location-allocation problems in Konno et al. [10], constrained bimatrix games in Mangasarian [11], three-dimensional assignment problems in Frieze [12], certain linear max-min problems in Falk [13], and many problems in engineering design, economic management, and operations research. Another reason why this problem attracts so much attention is that, by utilizing suitable techniques, many mathematical programs, such as general quadratic programming, bilinear programming, linear multiplicative programming, and quadratically constrained quadratic programming, can be converted into the special form of LMP (Benson [14]); as noted in Tuy [15], the important class of sum-of-ratios fractional problems can also be encapsulated by LMP. In fact, suppose that the numerators $f_j$ are convex and the denominators $g_j$ are concave and positive. Since each $1/g_j$ is then convex and positive, the sum-of-ratios fractional problem (with objective function $\sum_j f_j(x)/g_j(x)$) reduces to LMP when the $f_j$ and $g_j$ are linear. When some of the $f_j$ are negative, one can add a large real number to make them positive, owing to the compactness of the constraint set (Dai et al. [16]).
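One standard way to make the last reduction explicit (a sketch only; the symbols $c_j$, $d_j$, $e_j$, $f_j$ and the auxiliary variables $t_j$ are introduced here purely for illustration) is to clear the denominators with new variables:
$$\min_{x\in X}\ \sum_{j=1}^{p}\frac{c_j^{T}x+d_j}{e_j^{T}x+f_j}\ \Longleftrightarrow\ \min_{x\in X,\ t\in\mathbb{R}^{p}}\ \sum_{j=1}^{p} t_j\bigl(c_j^{T}x+d_j\bigr)\quad\text{s.t.}\quad t_j\bigl(e_j^{T}x+f_j\bigr)=1,\quad j=1,\dots,p,$$
so that, provided each denominator is positive on $X$, both the objective and the added constraints are sums of products of affine functions of $(x,t)$, that is, an instance of the generalized LMP form considered here.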

Moreover, from the algorithmic design point of view, a sum of products of two affine functions need not be convex (or even quasi-convex), and hence the linear multiplicative programming problem (LMP) may have multiple local solutions that fail to be globally optimal. Owing to these facts, developing practical global optimization algorithms for the LMP is of great theoretical and algorithmic significance.
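The simplest product term already illustrates this well-known fact:
$$f(x_1,x_2)=x_1x_2,\qquad \nabla^{2}f(x_1,x_2)=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},$$
whose eigenvalues are $+1$ and $-1$, so the Hessian is indefinite and $f$ is neither convex nor concave; moreover, the sublevel set $\{x: x_1x_2\le 0\}$ is the union of two opposite quadrants and is not convex, so $f$ is not even quasi-convex.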

In the past 20 years, many solution methods have been proposed for globally solving the linear multiplicative programming problem (LMP) and its special cases. These methods are mainly classified as parameterization-based methods, outer-approximation methods, various branch-and-bound methods, decomposition methods, and cutting plane methods; see Konno et al. [17], Thoai [18], Shen and Jiao [19, 20], Wang et al. [21], Jiao et al. [22], and Shen et al. [23]. Most of these methods for linear multiplicative programming are designed only for problems in which the constraint functions are all linear, or under the assumption that all linear functions appearing in the multiplicative terms are nonnegative. Moreover, most branch-and-bound algorithms for this problem are developed on the basis of a two-phase linear relaxation scheme, which takes more computational time in the approximation process. For example, in Zhao and Liu [24] and in Jiao et al. [22], both algorithms utilize two-phase relaxation methods, and a large amount of computational time is consumed in the relaxation process. Compared with these algorithms, the main features of our new algorithm are threefold. First, the problem investigated in this paper is a multiplicative program with linear multiplicative constraints, so it has a more general form than linear multiplicative programming problems with purely linear constraints. Second, the relaxation problem is a convex program that can be obtained with a one-step relaxation, which greatly improves the efficiency of the approximation process. Third, the nonnegativity condition on the linear functions appearing in the multiplicative terms is not required by our algorithm. Extensive numerical examples from recent literature and GLOBALLib, together with comparisons, are performed to test our algorithm for LMP.

The rest of this paper is arranged in the following way. In Section 2, the construction process of the convex relaxation problem is detailed, which provides a reliable lower bound for the optimal value of LMP. In Section 3, some key operations for designing a branch-and-bound algorithm and the global optimization algorithm for LMP are described; the convergence property of the algorithm is also established in this section. Numerical results showing the feasibility and efficiency of our algorithm are reported in Section 4, and some concluding remarks are given in the last section.

2. Convex Relaxation of LMP

As is well known, constructing a well-performing relaxation problem brings great convenience in designing a branch-and-bound algorithm for global optimization problems. In this section, we show how to construct the convex relaxation programming problem (CRP) for LMP. To this end, we first compute the initial variable bounds by solving the following linear programming problems:
$$\underline{x}_i^{0}=\min_{x\in X}\ x_i,\qquad \overline{x}_i^{0}=\max_{x\in X}\ x_i,\qquad i=1,\dots,n;$$
then an initial hyperrectangle $Y^{0}=\{x\in\mathbb{R}^{n}:\underline{x}_i^{0}\le x_i\le \overline{x}_i^{0},\ i=1,\dots,n\}$ can be obtained.
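As an illustration of this initialization step, the following sketch (not the authors' code; it assumes the constraint data are given as NumPy arrays A and b describing X = {x : Ax <= b}) computes the initial box with 2n linear programs:

```python
# Illustrative sketch: compute the initial box Y^0 = [x_lo, x_hi] by
# minimizing and maximizing each coordinate over X = {x : A x <= b}.
import numpy as np
from scipy.optimize import linprog

def initial_box(A, b):
    n = A.shape[1]
    x_lo, x_hi = np.empty(n), np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        # min x_i over X (bounds=(None, None): X itself is assumed bounded)
        lo = linprog(c=e, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        # max x_i over X, i.e. min -x_i
        hi = linprog(c=-e, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        x_lo[i], x_hi[i] = lo.fun, -hi.fun
    return x_lo, x_hi
```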

To introduce the convex relaxation programming problem for LMP over a subrectangle $Y\subseteq Y^{0}$, we further solve a set of linear programming problems that provide, over $Y$, lower and upper bounds for the linear functions appearing in the multiplicative terms. Based on the construction process of these bounds, and by combining (4) with (5) and performing some equivalent transformations, we can easily obtain a lower and an upper bound for each bilinear term. To facilitate the narrative, by introducing suitable shorthand notation, conclusion (6) can be reformulated accordingly. With this, we obtain a lower bound function and an upper bound function for each objective and constraint function, which bound it from below and from above over $Y$.
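For illustration only, a standard one-step convex bound of this type for a single bilinear term $y_1y_2$, with box bounds $y_1\in[l_1,u_1]$ and $y_2\in[l_2,u_2]$, can be written as follows (this particular construction is given as an example of the idea and need not coincide with the formulas referred to above):
$$y_1y_2=\tfrac{1}{4}\bigl[(y_1+y_2)^2-(y_1-y_2)^2\bigr]\ \ge\ \tfrac{1}{4}(y_1+y_2)^2-\tfrac{1}{4}\max\bigl\{(l_1-u_2)^2,\ (u_1-l_2)^2\bigr\},$$
$$y_1y_2\ \le\ \tfrac{1}{4}\max\bigl\{(l_1+l_2)^2,\ (u_1+u_2)^2\bigr\}-\tfrac{1}{4}(y_1-y_2)^2.$$
The first right-hand side is convex and the second is concave in $(y_1,y_2)$, so substituting them for the bilinear terms according to the signs of their coefficients yields a convex relaxation in a single step.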

So far, based on the above discussion, it is not hard to formulate the convex relaxation programming problem (CRP) of the LMP over the subrectangle $Y$ as follows:

Remark 1. As one can easily confirm, all functions appearing in the objective and constraints of (CRP) are convex; hence (CRP) is a convex program that can be solved effectively by convex optimization toolboxes such as CVX.
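To give a flavor of how such a relaxed subproblem could be passed to a disciplined convex programming solver, the following toy sketch uses CVXPY (the Python analogue of the CVX toolbox mentioned above); all data, the single product term, and the quadratic bound constant are hypothetical placeholders rather than the paper's actual relaxation:

```python
# Hedged sketch: a toy convex relaxation of  min (c1^T x + d1)(c2^T x + d2)
# over {x : A x <= b, lo <= x <= hi}, using the quadratic one-step bound
#   y1*y2 >= 0.25*(y1+y2)^2 - K,  K = 0.25*max{(l1-u2)^2, (u1-l2)^2},
# where [l1,u1], [l2,u2] are box bounds on y1 = c1^T x + d1, y2 = c2^T x + d2.
# All numeric data below are made-up placeholders, not from the paper.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 2.0]]); b = np.array([4.0, 3.0])
lo, hi = np.array([0.0, 0.0]), np.array([3.0, 3.0])
c1, d1 = np.array([1.0, -1.0]), 0.5
c2, d2 = np.array([2.0, 1.0]), -1.0

# Crude interval bounds on the two affine factors over the box [lo, hi].
l1 = c1 @ np.where(c1 >= 0, lo, hi) + d1; u1 = c1 @ np.where(c1 >= 0, hi, lo) + d1
l2 = c2 @ np.where(c2 >= 0, lo, hi) + d2; u2 = c2 @ np.where(c2 >= 0, hi, lo) + d2
K = 0.25 * max((l1 - u2) ** 2, (u1 - l2) ** 2)

x = cp.Variable(2)
y1, y2 = c1 @ x + d1, c2 @ x + d2
lower_bound = 0.25 * cp.square(y1 + y2) - K        # convex underestimator of y1*y2
prob = cp.Problem(cp.Minimize(lower_bound),
                  [A @ x <= b, x >= lo, x <= hi])
prob.solve()
print("lower bound on the product term:", prob.value)
```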

Remark 2. Both the lower and upper bound functions will approximate the corresponding original function as the diameter of the rectangle $Y$ converges to zero.

3. Branch-and-Bound Algorithm and Its Convergence

As is well known, branch-and-bound algorithms utilize tree search strategies to implicitly enumerate all possible solutions to a given problem, applying pruning techniques to eliminate regions of the search space that cannot yield a better solution. Three algorithmic components in the branch-and-bound scheme can be specified to fine-tune the performance of the algorithm: the search strategy, the branching strategy, and the pruning rules. This section describes these components and presents the proposed algorithm for solving LMP.

3.1. Key Operation

There are two important phases of any branch-and-bound algorithm: search phase and verification phase.

The choice of search strategy primarily impacts the search phase and has potentially significant consequences for the amount of computational time required and memory used. In this paper, we choose the depth-first search (DFS) strategy with node ranking techniques, which can substantially reduce the required storage space.

The choice of branching strategy determines how child nodes are generated from a subproblem. It has significant impacts on both the search phase and the verification phase. By branching appropriately at subproblems, the strategy can guide the algorithm towards optimal solutions. In this paper, to develop the proposed algorithm for LMP, we adopt a standard range bisection approach, which is adequate to ensure global convergence of the proposed algorithm. The detailed process is described as follows.

For any region $Y=\{y\in\mathbb{R}^{n}: l_i\le y_i\le u_i,\ i=1,\dots,n\}\subseteq Y^{0}$, let $q\in\arg\max\{u_i-l_i: i=1,\dots,n\}$ and $\mu=(l_q+u_q)/2$, and then the current region can be divided into the two following subregions:
$$Y_1=\{y\in Y: l_q\le y_q\le \mu\},\qquad Y_2=\{y\in Y: \mu\le y_q\le u_q\}.$$
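A minimal sketch of this bisection rule (assuming a box is stored as a pair of NumPy arrays of lower and upper bounds; the function name is illustrative):

```python
import numpy as np

def bisect_longest_edge(lower, upper):
    """Split the box [lower, upper] along its longest edge at the midpoint."""
    q = int(np.argmax(upper - lower))                  # index of the longest edge
    mid = 0.5 * (lower[q] + upper[q])
    upper_left = upper.copy();  upper_left[q] = mid    # child 1: [lower, upper_left]
    lower_right = lower.copy(); lower_right[q] = mid   # child 2: [lower_right, upper]
    return (lower, upper_left), (lower_right, upper)
```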

Another critical aspect of a branch-and-bound algorithm is the choice of pruning rules used to exclude regions of the search space from exploration. The most common way to prune is to produce a lower and/or upper bound on the objective function value at each subproblem and use it to discard subproblems whose bound is no better than the incumbent solution's value. For each partition subset generated by the above branching operation, the bounding operation is mainly concentrated on estimating a lower bound and an upper bound for the optimal value of the linear multiplicative programming problem (LMP). The lower bound can be obtained by solving the following convex relaxation programming problem:

Moreover, since any feasible solution of the LMP provides a valid upper bound on the optimal value, we can evaluate the original objective value at the optimal solution of CRP to determine an upper bound (if possible) for the optimal value of LMP over the corresponding subrectangle.

3.2. Branch-and-Bound Algorithm

Based on the former discussion, the presented algorithm for globally solving the LMP can be summarized as follows; a schematic code sketch of the resulting loop is given after the step listing.

Step 0 (initialization).
Step 0.1. Set the iteration counter $k:=0$ and the initial partition set as $Y^{0}$. Initialize the set of active nodes $\Omega_0=\{Y^{0}\}$, the upper bound $UB:=+\infty$, the set of feasible points $F:=\emptyset$, and the convergence tolerance $\epsilon>0$.
Step 0.2. Solve the initial convex relaxation problem (CRP) over the region $Y^{0}$; if the CRP is infeasible, then there is no feasible solution for the initial problem. Otherwise, denote its optimal value and solution by $LB_0$ and $x^{0}$, respectively. If $x^{0}$ is feasible to the LMP, we obtain an initial upper bound and lower bound of the optimal value for the linear multiplicative programming problem (LMP); that is, $UB:=f_0(x^{0})$ and $LB_0$ as above. Then, if $UB-LB_0\le\epsilon$, the algorithm stops and $x^{0}$ is a global $\epsilon$-optimal solution of the LMP; otherwise, proceed to Step 1.

Step 1 (branching). Partition $Y^{k}$ into two new subrectangles according to the partition rule described in Section 3.1. Delete $Y^{k}$ from the active node set $\Omega_k$ and add the two new nodes; denote the set of newly partitioned rectangles by $\overline{\Omega}_k$.

Step 2 (bounding). For each subregion still of interest $Y\in\overline{\Omega}_k$, obtain the optimal solution $x(Y)$ and optimal value $LB(Y)$ of the convex relaxation problem (CRP) by solving it over $Y$; if $LB(Y)>UB$, delete $Y$ from $\Omega_k$. Otherwise, update the lower and upper bounds: $LB_k:=\min\{LB(Y):Y\in\Omega_k\}$ and, whenever $x(Y)$ is feasible to the LMP, $UB:=\min\{UB,\ f_0(x(Y))\}$ with $F:=F\cup\{x(Y)\}$.

Step 3 (termination). If $UB-LB_k\le\epsilon$, the algorithm can be stopped; $UB$ is the global $\epsilon$-optimal value for LMP. Otherwise, set $k:=k+1$, select the node in $\Omega_k$ with the smallest lower bound as the current active node $Y^{k}$, and return to Step 1.
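The following schematic Python sketch mirrors Steps 0-3 above; the relaxation solver, the original objective, and the feasibility test are abstract placeholders supplied by the caller, and bisect_longest_edge is the helper sketched in Section 3.1. It is a generic outline under those assumptions, not the authors' implementation:

```python
import heapq
import numpy as np

def branch_and_bound(solve_crp, objective, is_feasible, lower0, upper0, eps=1e-6):
    """Generic sketch of the branch-and-bound loop in Steps 0-3.

    solve_crp(lo, hi)  -> (lower_bound, x) for the convex relaxation (CRP)
                          over the box [lo, hi], or (np.inf, None) if infeasible.
    objective(x)       -> original LMP objective value f_0(x).
    is_feasible(x)     -> True if x satisfies the LMP constraints.
    """
    best_val, best_x = np.inf, None
    lb0, x0 = solve_crp(lower0, upper0)
    if x0 is None:
        return None, None                            # the LMP itself is infeasible
    active, tie = [(lb0, 0, x0, lower0, upper0)], 1  # active nodes keyed by lower bound
    while active:
        lb, _, x, lo, hi = heapq.heappop(active)
        if is_feasible(x):                           # relaxation solution feasible for LMP
            val = objective(x)
            if val < best_val:
                best_val, best_x = val, x            # update the incumbent (upper bound)
        if lb >= best_val - eps:
            continue                                 # prune: gap over this node is closed
        # bisect_longest_edge: the box-splitting helper sketched in Section 3.1
        for c_lo, c_hi in bisect_longest_edge(lo, hi):
            c_lb, c_x = solve_crp(c_lo, c_hi)        # bound each child region
            if c_x is not None and c_lb < best_val - eps:
                heapq.heappush(active, (c_lb, tie, c_x, c_lo, c_hi))
                tie += 1
    return best_val, best_x
```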

3.3. Global Convergence of the Algorithm

The global convergence properties of the above algorithm for solving the linear multiplicative programming problem (LMP) are given in the following theorem.

Theorem 3. The proposed algorithm either terminates within finitely many iterations or generates an infinite sequence $\{x^{k}\}$, any accumulation point of which is a global optimal solution of the LMP.

Proof. If the proposed algorithm terminates finitely, assume that it terminates at iteration $k$. By the termination criterion, we know that $UB-LB_k\le\epsilon$. Based on the upper bounding technique described in Step 2, this implies that $f_0(x^{k})-LB_k\le\epsilon$. Let $v$ denote the optimal value of the linear multiplicative programming problem (LMP); then, by Section 3.1 and Step 2 above, we know that $LB_k\le v\le f_0(x^{k})$. Hence, taking (14) and (15) together, it is implied that $v\le f_0(x^{k})\le LB_k+\epsilon\le v+\epsilon$, and thus the proof of the first part is completed.
If the algorithm is infinite, then it generates an infinite feasible solution sequence $\{x^{k}\}$ for the LMP via solving the CRP. Since the feasible region of LMP is compact, the sequence must have a convergent subsequence. For definiteness and without loss of generality, assume $\lim_{k\to\infty}x^{k}=x^{*}$; then we have $\lim_{k\to\infty}LB_k\le v$. By the definitions of the lower and upper bound functions and the exhaustiveness of the bisection rule, the gap between the relaxation and the original functions over $Y^{k}$ converges to zero. Moreover, we have $\lim_{k\to\infty}LB_k=\lim_{k\to\infty}f_0(x^{k})=f_0(x^{*})$. Therefore, we have $f_0(x^{*})\le v\le f_0(x^{*})$, that is, $f_0(x^{*})=v$. This implies that $x^{*}$ is a global optimal solution for the linear multiplicative programming problem (LMP).

4. Numerical Experiments

To test the efficiency and solution quality of the proposed algorithm, we performed some computational experiments on a personal computer with an Intel Core i5 2.40 GHz processor and 4 GB of RAM. The code is written in Matlab 2014a, and all subproblems are solved using CVX.

We consider some instances of the linear multiplicative programming problem (LMP) from recent literature, namely Wang and Liang [25], Jiao [26], Shen et al. [23], Shen and Jiao [19, 20], and Jiao et al. [22], and from GLOBALLib at http://www.gamsworld.org/global/globallib.htm. Among them, Examples 1–6 are taken from the recent literature, Examples 7–11 are taken from GLOBALLib, a collection of nonlinear programming models, and the last example is a nonlinear nonconvex mathematical programming problem with randomized linear multiplicative objective and constraint functions.

Example 1 (references in Shen et al. [23]).

Example 2 (references in Wang and Liang [25] and in Jiao [26]).

Example 3 (references in Jiao [26]).

Example 4 (references in Shen and Jiao [19, 20] and in Jiao et al. [22]).

Example 5 (references in Shen and Jiao [19, 20] and in Jiao et al. [22]).

Example 6 (references in Jiao et al. [22]). Examples 1–6 are taken from the literature, where they are solved by branch-and-bound algorithms with linear relaxation techniques. Numerical experiments demonstrate that our method is more efficient than these methods, in the sense that our algorithm requires rather fewer iterations and less CPU time for solving the same problems. Specific results for numerical Examples 1–6 are listed in Table 1, where the notations used in the headline have the following meanings: Exam.: serial number of the numerical example in this paper; Ref.: serial number of the numerical example in the references; Iter.: number of iterations; Time: CPU time in seconds; Prec.: precision used in the algorithm; Opt. val. and Opt. sol. denote the optimal value and solution of the problem, respectively.

Example 7 (st-qpk1).

Example 8 (st-z).

Example 9 (ex5-4-2).

Example 10 (st-qpc-m1).

Example 11 (st-e26). These five examples are taken from GLOBALLib; all of them are generalized linear multiplicative programs with different types of constraints. Computational results and some known results are listed in Table 2, where the notations used in the headline have the following meanings: Exam. denotes the serial number of the example tested in this paper; Best sol. and Best val. represent the best optimal solution and optimal value currently known; Our sol. and Our val. are the optimal solution and optimal value obtained by our algorithm described in this paper. From the results summarised in the table, we can see that our algorithm can effectively solve the LMP.

Example 12 (random test). In this problem, the real coefficients of the linear functions in the multiplicative objective and constraint terms, the constant terms, and the real elements of the constraint matrix and right-hand-side vector are all randomly generated within prescribed ranges.
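A sketch of how such random instances could be generated is given below; the specific coefficient ranges, the variable box, and the function name are hypothetical placeholders rather than the ranges used in the paper's experiments:

```python
# Hedged sketch of a random LMP instance generator; all numeric ranges are
# hypothetical placeholders, not the paper's actual experimental settings.
import numpy as np

def random_lmp_instance(m, n, p, rng=None):
    """Data for  min sum_j (C[j]@x + d[j])*(E[j]@x + f[j])  s.t.  A@x <= b, lb <= x <= ub."""
    rng = np.random.default_rng() if rng is None else rng
    C = rng.uniform(-1.0, 1.0, size=(p, n))   # first affine factors of the p products
    d = rng.uniform(-1.0, 1.0, size=p)
    E = rng.uniform(-1.0, 1.0, size=(p, n))   # second affine factors
    f = rng.uniform(-1.0, 1.0, size=p)
    A = rng.uniform(0.0, 1.0, size=(m, n))    # linear constraints A @ x <= b
    b = rng.uniform(1.0, 10.0, size=m)        # positive right-hand sides (x = 0 is feasible)
    lb, ub = np.zeros(n), 10.0 * np.ones(n)   # box constraints keep the feasible set bounded
    return C, d, E, f, A, b, lb, ub
```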

For this problem, we tested 8 groups of instances with different dimensions. For each group, we solved 10 instances, for a total of 80 instances. The computational results are listed in Table 3, where the notations used in the headline have the following meanings: Avr. iter.: average number of iterations of the algorithm; Std. dev.: standard deviation; Avr. time: average CPU time in seconds; the remaining two headings denote the numbers of linear constraints and variables, respectively.

5. Concluding Remarks

This paper presents a new relaxation method for designing a branch-and-bound algorithm for the generalized linear multiplicative programming problem with linear multiplicative constraints. The relaxation problem is a convex program that can be easily obtained with a one-step relaxation, and it has a better approximation effect than the usual two-phase linear relaxation method. The presented algorithm works efficiently without the nonnegativity restriction on the linear functions in the multiplicative terms, whereas this restriction is a necessary condition for most branch-and-bound algorithms described in the literature. Extensive numerical experiments on problems from recent literature show that our method is feasible and effective for this kind of problem.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This paper is supported by the Science and Technology Key Project of Education Department of Henan Province (14A110024 and 15A110023), the National Natural Science Foundation of Henan Province (152300410097), the Science and Technology Projects of Henan Province (182102310941), the Cultivation Plan of Young Key Teachers in Colleges and Universities of Henan Province (2016GGJS-107), the Higher School Key Scientific Research Projects of Henan Province (18A110019 and 17A110021), and the Major Scientific Research Projects of Henan Institute of Science and Technology (2015ZD07).