#### Abstract

A new two-part parametric linearization technique is proposed for globally solving a class of
nonconvex programming problems (NPP). First, a two-part parametric linearization method
is adopted to construct underestimators of the objective and constraint functions, by utilizing
a transformation together with a parametric linear upper bounding function (LUBF) and a linear lower
bounding function (LLBF) of the natural logarithm function and the exponential function with *e*
as the base, respectively. Then, a sequence of relaxation lower linear programming problems, which are
embedded in a branch-and-bound algorithm, is derived from the initial nonconvex programming
problem. The proposed algorithm converges to a global optimal solution by successively solving a
series of linear programming problems. Finally, some examples are
given to illustrate the feasibility of the presented algorithm.

#### 1. Introduction

In this paper, we consider a class of nonconvex programming problems as follows: where and are real numbers, is a matrix, , and are finite. for all . (NPP) contains various variants such as a sum or product of a finite number of ratios of linear functions, generalized linear multiplicative programs, general polynomial programming, quadratic programming, and generalized geometric programming. Hence, (NPP) and its special forms have attracted considerable attention in the literature because of their large number of practical applications in various fields of study, including transaction costs [1], financial optimization [2], robust optimization [3], VLSI chip design [4], data mining/pattern recognition [5], queueing-location problems [6, 7], bond portfolio optimization [8, 9], and elastic-plastic finite element analysis of metal forming processes [10]. From a research point of view, (NPP) poses significant theoretical and computational challenges, since it generally possesses multiple local optima that are not globally optimal. Recently, Jiao [11] and Shen et al. [12] proposed branch-and-bound algorithms for globally solving a class of nonconvex programming problems (NPP). By utilizing tangential hypersurfaces, convex envelope approximations of the exponential function, and concave envelope approximations of the logarithmic function, a two-stage linear relaxation technique was given. A linear programming relaxation of the original problem was then constructed, and a branch-and-bound algorithm was proposed for globally solving (NPP).

For all , if , (NPP) reduces to linear multiplicative programming (LMP) [15, 16]. When for any , (NPP) is called a multiplicative programming problem with exponent (MPE) [17, 18]; by utilizing the logarithmic property, one can obtain an equivalent problem of (MPE), and a linear relaxation of the equivalent problem is obtained via tangential hypersurfaces and concave envelope approximations. A new branch-and-bound algorithm is then given that solves a sequence of linear relaxations over partitioned subsets in order to find a global optimal solution to problem (MPE). If, for all , and , the problem is called a generalized linear multiplicative program (GLMP) [19]. A greedy branching rule for rectangular branch-and-bound algorithms has been proposed for solving problem (GLMP).

Assume that for all , , and, without loss of generality, let and ; then (NPP) reduces to a linear sum-of-ratios fractional program. This is a global optimization problem; that is, it is known to generally possess multiple local optima that are not globally optimal [20]. Furthermore, it is NP-hard [21], and the objective function is neither quasiconvex nor quasiconcave. A number of algorithms have been proposed for globally solving a linear sum-of-ratios fractional program. They can be classified as follows: parametric simplex methods [22, 23], outer approximation methods [24, 25], branch-and-bound approaches [13, 26–29], a duality-bounds method [30], an iterative search method [31], and so forth. Readers can find the applications, theory, and algorithms of sum-of-ratios fractional programming in [32]. If there exist some and , (NPP) is called a generalized linear fractional programming problem. Shen and Wang [14] used a transformation and a two-part linearization technique to systematically convert the generalized linear fractional program into a series of linear programming problems.

When for all , , and , (NPP) reduces to the general polynomial programming problem investigated earlier in [33–35]. Most recently, Lasserre [36, 37] developed a class of positive semidefinite relaxations for polynomial programming with the property that any polynomial program can be approximated as closely as desired by semidefinite programs of this class.

In this paper, a new global optimization method is presented for (NPP) that solves a sequence of linear programming problems over partitioned subsets. By using a transformation and a two-part parametric linearization technique, we can systematically convert (NPP) into a series of linear programming problems. The solutions to these converted problems can be made sufficiently close to the global optimum of (NPP) by a successive refinement process. Numerical examples show that the proposed method solves all of the test problems, finding globally optimal solutions within a prespecified tolerance.

The organization and content of this paper can be summarized as follows. In Section 2, we first discuss parametric linear estimation of the natural logarithm function and the exponential function with *e* as the base, respectively. Then, a two-part parametric linearization method is presented for generating the relaxation lower linear programming of (NPP). In Section 3, the proposed branch-and-bound algorithm, in which the relaxed subproblems are embedded, is described, and the convergence of the algorithm is established. Some numerical results are reported in Section 4. Finally, concluding remarks are given in Section 5.

#### 2. Parametric Linear Relaxation of (NPP)

Now, we derive an equivalent form of the function by transformation. First, for any , since , we assume that

Then, for all , the function can be rewritten as where

In order to construct an underestimator of the function for all , we adopt a two-part parametric linearization method. We first derive a linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of about the variable , respectively. Then, in the second part, an LLBF about the primal variable will ultimately be constructed.

##### 2.1. Parametric Linear Estimation of Logarithm and Exponential Functions

We first construct parametric linear overestimators and underestimators of the natural logarithm function and the exponential function with *e* as the base over an interval vector , respectively.

Let , , where and are called the lower bound and upper bound, respectively. For any , we denote where is an -dimensional vector with components equal to 0 or 1. For convenience, we denote by the vector with all components equal to 0 and by the vector with all components equal to 1. Then, we have and . The following theorem illustrates how to construct the lower and upper bound linear functions of natural logarithm function and the exponential function with as the base, respectively.
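Since the natural logarithm is concave and the exponential function is convex, a chord (secant) and a tangent line yield a linear bound pair on any interval. The following sketch shows one standard way such LLBF/LUBF pairs arise; it is a minimal one-dimensional illustration in Python, with the tangent point taken at the interval midpoint as an illustrative choice of the free parameter, whereas the theorem's construction is parametric and works componentwise on interval vectors:

```python
import math

def ln_estimators(a, b, c=None):
    """Linear estimators of ln(t) on [a, b], 0 < a < b.
    ln is concave, so the chord through (a, ln a), (b, ln b) lies
    below the graph (an LLBF) and any tangent lies above it (an LUBF)."""
    c = (a + b) / 2.0 if c is None else c          # tangent point (a free parameter)
    k = (math.log(b) - math.log(a)) / (b - a)
    llbf = lambda t: math.log(a) + k * (t - a)     # chord: lower bound
    lubf = lambda t: math.log(c) + (t - c) / c     # tangent: upper bound
    return llbf, lubf

def exp_estimators(a, b, c=None):
    """Linear estimators of e^t on [a, b].
    exp is convex, so the tangent at c is an LLBF and the chord an LUBF."""
    c = (a + b) / 2.0 if c is None else c
    llbf = lambda t: math.exp(c) * (1.0 + t - c)   # tangent: lower bound
    k = (math.exp(b) - math.exp(a)) / (b - a)
    lubf = lambda t: math.exp(a) + k * (t - a)     # chord: upper bound
    return llbf, lubf
```

Both pairs agree with the function at the interval endpoints (chord) or at the tangent point, which is why the gap between the bounds shrinks to zero as the interval width does.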

Theorem 1. *For any interval vector , , one assumes that the vertices of are , in form of (6). Let or and its gradient function over . Then there exist vectors such that the linear functions
**
satisfy, for all , the inequalities
**
and moreover
**
where , in form of (6), are vertices of the interval vector , and the functions , , have the argument and depend on the two parameters and .*

*Proof. *For the function , this result is shown in [38], and for , the proof is similar. However, to provide a self-contained presentation, and because this result is central to this paper, we give a direct proof for the natural logarithm function.

By and it follows that there exist vectors and satisfying
where, for ,

By the mean value theorem, we have, for all ,
where for some . Then, (6) and (10) imply that, for , the inequalities
hold, where denotes the th component of . And for the inequalities
are valid.

Consequently, it follows from the mean value theorem that

So, , and .

Similarly, we can prove that

We now show how to construct a two-part parametric linearization method to systematically convert (NPP) into a series of linear programming problems, by utilizing a transformation together with a parametric linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of the natural logarithm function and the exponential function with *e* as the base, respectively.

##### 2.2. First-Part Parametric Linear Relaxation

In this subsection, we discuss how to obtain the first-stage relaxation LLBF of about the variable by using Theorem 1.

Let denote either the initial rectangle or some subrectangle of that is generated by the proposed algorithm. Without loss of generality, let . Denote the lower bound and upper bound of by and , which can be derived over the presently considered rectangle in the algorithm. For any , , fix a vector , and for the function and interval vector , calculate the interval vector satisfying the inequalities of Theorem 1 in [38], where , . That is, for any and any , we calculate the following formulas:

Thus, the vertices of the interval vectors are given by , respectively, where denotes the th unit vector. Therefore, by Theorem 1, we can derive parametric linear lower bounding functions of with respect to as follows:

Let . Then, from (5), if , the first-part LLBF of , denoted by , about can be obtained, where denotes the th component of . And if ,

##### 2.3. Second-Part Parametric Linear Relaxation

Now, by Theorem 1, we construct the second-part LLBF of about the variable . For any interval vector and any , let . For convenience, the following notations and functions are introduced: where denotes the th unit vector in . Then, by Theorem 1, for any vector , we define the LLBF of by as follows:

Then, if , we can construct the LLBF of as follows:

And, for , we can get the LLBF of as

Taken together, the LLBF of function with respect to can be obtained as

Obviously, for all , .

##### 2.4. Approximation Relaxation Linear Programming

Consequently, the approximation relaxation lower linear programming (LLP) of problem (NPP) with the parametric vector over the interval vector is easily obtained as follows:

Based on the linear underestimators, every feasible point of (NPP) is feasible in (LLP), and the objective of (LLP) is smaller than or equal to that of (NPP) for all points in . Thus, (LLP) provides a valid lower bound for the solution of (NPP) over the partition set . It should be noted that problem (LLP) contains only the necessary constraints to guarantee convergence of the algorithm. The following results are key to the convergence of the proposed algorithm.

Lemma 2. *For all , and , let
**
Then one has .*

*Proof. *From Theorem 1 and the definition of the function , for any , it follows that
where is a gradient function of , for some , and are vertices of the interval vectors and , respectively. By (6) and the proof of Theorem 1, the right-hand side of inequality (29) satisfies, for arbitrarily fixed ,
where for some . This shows that

Similarly, we can prove that .

Similarly, we have Lemma 3 (also see Lemma 1 in [38]).

Lemma 3. *For all , let
**
Then .*

Theorem 4. *For any , let . Then, when , for any , the difference of and satisfies .*

*Proof. *Firstly, notice that when . Then, for any , and for any , let
and let and . Therefore, we only need to prove as .

We first prove . Since
it is obvious that we only need to prove . We first consider the difference . By the definition of , it follows that
where . Then, by Lemma 2, as .

Now, the difference is considered. From the definition of , , we can obtain

Then, by Lemma 3, as . Therefore, when , we can get

By similar discussion as above, we can get

It follows from (37) and (38) that when .

Theorem 4 shows that, as the subhyperrectangle becomes sufficiently small, the solution to (LLP)() sufficiently approaches the solution of (NPP)(), and this guarantees the global convergence of the method.

#### 3. Algorithm and Its Convergence

In this section, a branch-and-bound algorithm is developed to solve (NPP) based on the relaxation lower linear programming in Section 2. This algorithm solves a sequence of linear programs over partitioned subsets of in order to find a global optimum. Consequently, the method requires partitioning the set into subhyperrectangles, each associated with a node of the branch-and-bound tree, and each node is associated with a relaxation linear subproblem on its subhyperrectangle.

First, at any stage of the algorithm, suppose that we have a collection of active nodes denoted by , each associated with a subhyperrectangle . For each node , we will have computed a lower bound of the optimal value of the problem ((NPP)()) via the solution of problem (LLP), so that the lower bound of the optimal value of (NPP) on the whole initial box region is given by at stage . Whenever the lower bounding solution to any node subproblem, that is, the solution to the relaxation linear program (LLP), turns out to be feasible to (NPP), we update the upper bound and the incumbent solution if necessary. The active node collection will then satisfy , for each stage . We next select an active node such that for further consideration. The active node is partitioned into two subhyperrectangles according to the branching rule below. For these two subhyperrectangles, the fathoming step is applied in order to determine whether they should be eliminated. Finally, we obtain a collection of active nodes for the next stage, and this process is repeated until convergence.

##### 3.1. Branching Rule

The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule. This rule is sufficient to ensure convergence since, along any infinite branch of the branch-and-bound tree, it drives to zero the intervals of the variables associated with the term yielding the greatest discrepancy in the employed approximation.

Consider any node subproblem identified by the hyperrectangle . The selection of the branching variable and the partitioning of are then done by the following rule (see also [39, 40]). Let , and partition by bisecting the interval into the subintervals and .
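A minimal sketch of such a bisection step in Python follows. The branching index is chosen here as a longest edge; the paper's exact selection formula involves quantities from its displayed equations, so this index rule is only an illustrative stand-in:

```python
def bisect(lo, hi):
    """Bisection branching rule: split the hyperrectangle [lo, hi]
    along a longest edge p at its midpoint, yielding the two children
    [lo, hi with hi_p = mid] and [lo with lo_p = mid, hi]."""
    p = max(range(len(lo)), key=lambda i: hi[i] - lo[i])  # branching variable
    mid = (lo[p] + hi[p]) / 2.0
    left = (list(lo), hi[:p] + [mid] + hi[p + 1:])
    right = (lo[:p] + [mid] + lo[p + 1:], list(hi))
    return left, right
```

Each call halves one edge, so along any infinite nested sequence of children the selected edge lengths tend to zero, which is the property the convergence argument relies on.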

##### 3.2. Algorithmic Statement

The deterministic global optimization algorithm is summarized as follows.

*Step 0 (initialization).*
(0.1) Initialize the iteration counter , the set of all active nodes , the upper bound , and the set of feasible points .
(0.2) Solve (LLP) with in order to find an optimal solution and the optimal value . If is feasible to (NPP), then set and , if necessary.
(0.3) If , where is some accuracy tolerance, then stop: is a global -optimal solution to (NPP). Otherwise, set and proceed to Step 1.

*Step 1 (partitioning step). *According to the rectangle bisection rule, select a branching variable and partition to get two new subhyperrectangles . Denote the set of new partition rectangles by .

*Step 2 (feasibility check for (NPP) in subhyperrectangles). *For each new node and each , compute the lower bound of each linear constraint function over the presently considered rectangle; that is, compute the lower bound . If there exists some such that
then the corresponding subrectangle is eliminated from ; that is, ; then skip to the next element of .
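The bound used in this fathoming test is the exact minimum of a linear function over a box, computable coordinate by coordinate. A minimal sketch (with hypothetical names), assuming constraints of the form g(x) ≤ 0 so that a positive lower bound certifies that no point of the subrectangle is feasible:

```python
def linear_lower_bound(c, c0, lo, hi):
    """Minimum of c^T x + c0 over the box [lo, hi]: each coordinate
    contributes c_i * lo_i when c_i >= 0 and c_i * hi_i when c_i < 0,
    since a linear function attains its box minimum at a vertex."""
    return c0 + sum(ci * (li if ci >= 0 else ui)
                    for ci, li, ui in zip(c, lo, hi))
```

If this value is positive for some constraint, the subrectangle can be discarded without solving its relaxation, which is what makes the feasibility check in Step 2 cheap.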

*Step 3 (bounding step). *If , go to Step 5. If , solve (LLP) to obtain and for each . If , set . Otherwise, if is feasible to (NPP), then update and , if necessary.

*Step 4 (updating the upper bound). *Select the midpoint of ; if is feasible to (NPP)(), then . Define the upper bound . If , denote the best known feasible point by .

*Step 5 (updating the lower bound). *The remaining partition set is now , and a new lower bound is .

*Step 6 (convergence check). *Set . If , then stop with as the optimal value of (NPP) and as an optimal solution. Otherwise, select an active node such that , set , and go to Step 1.
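The overall loop can be sketched in a compressed, one-dimensional form, where `f` stands in for the (NPP) objective and `lb` for any valid lower bounding oracle playing the role of the relaxation (LLP); both are hypothetical placeholders for illustration only:

```python
import heapq

def branch_and_bound(f, lb, a, b, eps=1e-4):
    """Illustrative 1-D version of the branch-and-bound loop of Section 3:
    lb(a, b) must be a valid lower bound of f on [a, b], and midpoints of
    subintervals supply feasible upper bounds (cf. Steps 1-6)."""
    ub = f((a + b) / 2.0)                       # incumbent upper bound
    heap = [(lb(a, b), a, b)]                   # active nodes, best bound first
    while heap:
        low, a, b = heapq.heappop(heap)
        if ub - low <= eps:                     # convergence check (Step 6)
            break
        m = (a + b) / 2.0                       # bisection branching (Step 1)
        for lo_i, hi_i in ((a, m), (m, b)):
            ub = min(ub, f((lo_i + hi_i) / 2.0))  # update upper bound (Step 4)
            bound = lb(lo_i, hi_i)
            if bound < ub - eps:                # fathom otherwise (Step 5)
                heapq.heappush(heap, (bound, lo_i, hi_i))
    return ub
```

For example, pairing it with `f = lambda x: x * x` and the valid interval bound `lb = lambda a, b: 0.0 if a <= 0.0 <= b else min(a * a, b * b)` on [-1, 2] drives the gap between the incumbent and the lower bound below `eps`.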

##### 3.3. Convergence of the Algorithm

Based on Theorem 4, the global convergence of the algorithm is given in Theorem 5.

Theorem 5. *The above algorithm either terminates finitely with the incumbent solution being optimal to (NPP) or generates an infinite sequence of iterations such that, along any infinite branch of the branch-and-bound tree, any accumulation point of the sequence is a global solution to (NPP).*

*Proof. *If the proposed algorithm terminates finitely, then obviously is a global optimal value and is an optimal solution of (NPP). If the algorithm is infinite, it generates at least one infinite sequence such that for any . Then, from [39, 40], for some point . For every iteration of the algorithm, the following results hold:

Since is contained in a compact set , there must exist a convergent subsequence; assume . Then, from the proposed algorithm, there exists a decreasing subsequence , where with , and . According to Theorem 4, we have .

All that remains is to prove that is feasible to (NPP)(). First, it is obvious that , since is closed. Second, by the algorithm, we obtain that, for all , is a feasible solution to (NPP); that is, . Taking limits over in this inequality yields . The remainder of the proof is by contradiction. Assume that for some . Because the function is continuous and, again by Theorem 4, the sequence converges to , by the definition of convergence there must exist such that for any . Therefore, for any , we have , which implies that (LLP)() is infeasible, violating the assumption that . This is a contradiction, and thus the proof is complete.

#### 4. Numerical Experiments

To verify the performance of the proposed global optimization algorithm, several test problems were solved. The test problems are coded in C++, and the experiments were conducted on a Pentium IV (3.06 GHz) microcomputer. Set . The results of Examples 1–5 are summarized in Table 1. In Table 1, the following notations are used for row headers: Iter.: number of algorithm iterations; : the maximal length of the enumeration tree.

*Example 1 (see [13]). *Consider

*Example 2 (see [13]). *Consider

*Example 3 (see [14]). *Consider

*Example 4 (see [11, 12]). *Consider

*Example 5 (see [11, 12]). *Consider

*Example 6. *In this example, we solve 10 different random instances:
where , , is a matrix, and all elements of , , and are randomly generated, with ranges . Table 2 summarizes our computational results. In Table 2, the following indices characterize the performance of the algorithm: (): the dimensions of the matrix ; Iter.: the average number of iterations; time: the average execution time in seconds.

#### 5. Conclusion

In this paper, a global optimization algorithm is presented for a class of nonconvex programming problems (NPP). A transformation and a two-part parametric linearization technique are applied to the initial (NPP), which is reduced to a parametric relaxation lower linear programming problem based on linear lower bounding functions of the objective function and the nonlinear constraint functions. Thus, the initial (NPP) is reduced to a sequence of linear programming problems through successive refinement of a linear relaxation of the feasible region and the objective function. The algorithm attains convergence to the global minimum through this successive refinement and the subsequent solution of a series of linear programming problems. The proposed algorithm is applied to several test problems; in all cases, convergence to the global minimum is achieved.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have improved the earlier version of this paper. This work was supported by the Ph.D. Start-up Fund of the Natural Science Foundation of Guangdong Province, China (no. S2013040012506), and the Project Science Foundation of Guangdong University of Finance (no. 2012RCYJ005).