Research Article  Open Access
A New Global Optimization Algorithm for Solving a Class of Nonconvex Programming Problems
Abstract
A new two-part parametric linearization technique is proposed for globally solving a class of nonconvex programming problems (NPP). Firstly, a two-part parametric linearization method is adopted to construct underestimators of the objective and constraint functions, by utilizing a transformation together with a parametric linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of the natural logarithm function and the exponential function with e as the base, respectively. Then, a sequence of relaxation lower linear programming problems, which are embedded in a branch-and-bound algorithm, are derived from the initial nonconvex programming problem. The proposed algorithm converges to a global optimal solution by successively solving a series of linear programming problems. Finally, some examples are given to illustrate the feasibility of the presented algorithm.
1. Introduction
In this paper, we consider a class of nonconvex programming problems as follows: where and are real numbers, is a matrix, , and are finite, and for all . (NPP) contains various variants such as a sum or product of a finite number of ratios of linear functions, generalized linear multiplicative programs, general polynomial programming, quadratic programming, and generalized geometric programming. Hence, (NPP) and its special forms have attracted considerable attention in the literature because of their large number of practical applications in various fields of study, including transaction costs [1], financial optimization [2], robust optimization [3], VLSI chip design [4], data mining/pattern recognition [5], queueing-location problems [6, 7], bond portfolio optimization [8, 9], and elastic-plastic finite element analysis of metal forming processes [10]. From a research point of view, (NPP) poses significant theoretical and computational challenges, since it generally possesses multiple local optima that are not globally optimal. Recently, Jiao [11] and Shen et al. [12] proposed branch-and-bound algorithms for globally solving a class of nonconvex programming problems (NPP). By utilizing tangential hypersurfaces, convex envelope approximations of the exponential function, and concave envelope approximations of the logarithmic function, a two-stage linear relaxation technique was given. The linear programming relaxation of the original problem can then be constructed, and a branch-and-bound algorithm was proposed for globally solving (NPP).
For all , if , (NPP) reduces to linear multiplicative programming (LMP) [15, 16]. When for any , (NPP) is called a multiplicative programming problem with exponent (MPE) [17, 18]; by utilizing the logarithmic property, one can obtain an equivalent problem of (MPE), and a linear relaxation of the equivalent problem is obtained via tangential hypersurfaces and concave envelope approximations. A branch-and-bound algorithm then finds a global optimal solution to problem (MPE) by solving a sequence of linear relaxations over partitioned subsets. If, for all , and , the problem is called a generalized linear multiplicative program (GLMP) [19]. A greedy branching rule for rectangular branch-and-bound algorithms has been proposed for solving problem (GLMP).
Assume that for all , , and, without loss of generality, let and ; then (NPP) reduces to a linear sum-of-ratios fractional program. This is a global optimization problem; that is, it is known to generally possess multiple local optima that are not globally optimal [20]. Furthermore, it is NP-hard [21], and the objective function is neither quasiconvex nor quasiconcave. A number of algorithms have been proposed for globally solving the linear sum-of-ratios fractional program. They can be classified as follows: parametric simplex methods [22, 23], outer approximation methods [24, 25], branch-and-bound approaches [13, 26–29], a duality-bounds method [30], an iterative search method [31], and so forth. Readers can find the applications, theory, and algorithms of sum-of-ratios fractional programming in [32]. If there exist some and , (NPP) is called a generalized linear fractional programming problem. Shen and Wang [14] used a transformation and a two-part linearization technique to systematically convert the generalized linear fractional program into a series of linear programming problems.
When for all , , and , (NPP) reduces to the general polynomial programming problem investigated earlier in [33–35]. More recently, Lasserre [36, 37] developed a class of positive semidefinite relaxations for polynomial programming with the property that any polynomial program can be approximated as closely as desired by a semidefinite program of this class.
In this paper, a new global optimization method is presented for (NPP) that solves a sequence of linear programming problems over partitioned subsets. By using a transformation and a two-part parametric linearization technique, we can systematically convert (NPP) into a series of linear programming problems. The solutions to these converted problems can be made arbitrarily close to the global optimum of (NPP) by a successive refinement process. Numerical examples show that the proposed method solves all of the test problems, finding globally optimal solutions within a prespecified tolerance.
The organization and content of this paper can be summarized as follows. In Section 2, we first discuss parametric linear estimation of the natural logarithm function and the exponential function with e as the base, respectively. Then, a two-part parametric linearization method is presented for generating the relaxation lower linear programming of (NPP). In Section 3, the proposed branch-and-bound algorithm, in which the relaxed subproblems are embedded, is described, and the convergence of the algorithm is established. Some numerical results are reported in Section 4. Finally, concluding remarks are given in Section 5.
2. Parametric Linear Relaxation of (NPP)
Now, we derive an equivalent form of the function by a transformation. First, for any , since , we assume that
Then, for all , the function can be rewritten as where
In order to construct an underestimator of the function for all , we adopt a two-part parametric linearization method. We first derive a linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of with respect to the variable , respectively. Then, in the second part, an LLBF with respect to the primal variable is constructed.
2.1. Parametric Linear Estimation of Logarithm and Exponential Functions
We first construct parametric linear overestimators and underestimators of the natural logarithm function and the exponential function with e as the base on an interval vector , respectively.
Let , , where and are called the lower bound and upper bound, respectively. For any , we denote where is a vector with components equal to 0 or 1. For convenience, we denote by the vector with all components equal to 0 and by the vector with all components equal to 1. Then, we have and . The following theorem illustrates how to construct lower and upper bounding linear functions of the natural logarithm function and the exponential function with e as the base, respectively.
Theorem 1. For any interval vector , , assume that the vertices of are , of the form (6). Let or , with gradient over . Then there exist vectors such that the linear functions satisfy, for all , the inequalities and, moreover, where , of the form (6), are the vertices of the interval vector , and the functions , have the argument and depend on the two parameters and .
Proof. For the function , this result is shown in [38], and for , the proof is similar. However, to provide a self-contained presentation, and because this result is central to this paper, we give a direct proof for the natural logarithm function.
By and it follows that there exist vectors and satisfying
where, for ,
By the mean value theorem, we have, for all ,
where for some . Then, (6) and (10) imply that, for , the inequalities
hold, where denotes the th component of . And for the inequalities
are valid.
Consequently, it follows from the mean value theorem that
So, , and .
Similarly, we can prove that
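Although the theorem is stated for multivariate interval vectors with parameter-dependent bounds, the underlying construction can be illustrated in one dimension: since the natural logarithm is concave and the exponential is convex, a chord over the interval and a tangent line give valid linear bounding functions. The following sketch (all function names and the midpoint tangent choice are ours, not the paper's) checks the resulting sandwich numerically.

```python
import math

def ln_bounds(l, u):
    """Linear bounding functions of ln(x) on [l, u], 0 < l < u.

    ln is concave, so the chord through (l, ln l) and (u, ln u) is a
    linear lower bounding function (LLBF), and any tangent line, here
    taken at the midpoint, is a linear upper bounding function (LUBF).
    """
    k = (math.log(u) - math.log(l)) / (u - l)          # chord slope
    chord = lambda x: math.log(l) + k * (x - l)        # LLBF
    m = 0.5 * (l + u)
    tangent = lambda x: math.log(m) + (x - m) / m      # LUBF (slope 1/m)
    return chord, tangent

def exp_bounds(l, u):
    """For the convex function exp the roles swap: the midpoint tangent
    is the LLBF and the chord is the LUBF."""
    m = 0.5 * (l + u)
    tangent = lambda x: math.exp(m) * (1.0 + (x - m))  # LLBF
    k = (math.exp(u) - math.exp(l)) / (u - l)
    chord = lambda x: math.exp(l) + k * (x - l)        # LUBF
    return tangent, chord

# sanity check of the sandwich chord <= ln <= tangent on [1, 3]
lo_f, up_f = ln_bounds(1.0, 3.0)
for i in range(101):
    x = 1.0 + 2.0 * i / 100
    assert lo_f(x) - 1e-9 <= math.log(x) <= up_f(x) + 1e-9
```

In the paper's setting such bounds are built per component, with the parameter vector selecting vertices of the interval vector, but the concavity/convexity argument is the same.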
We now show how to construct a two-part parametric linearization method to systematically convert (NPP) into a series of linear programming problems, by utilizing a transformation together with a parametric linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of the natural logarithm function and the exponential function with e as the base, respectively.
2.1. First-Part Parametric Linear Relaxation
In this subsection, we discuss how to obtain the first-part relaxation LLBF of with respect to the variable by using Theorem 1.
Let denote either the initial rectangle or some subrectangle of generated by the proposed algorithm. Without loss of generality, let . Denote the lower and upper bounds of by and , which can be derived on the currently considered rectangle in the algorithm. For any , , fix a vector , and for the function and interval vector , calculate the interval vector satisfying the inequalities of Theorem 1 in [38], where , . That is, for any and any , we calculate the following formulas:
Thus, the vertices of the interval vectors are, respectively, where denotes the th unit vector. Therefore, by Theorem 1, we can derive parametric linear lower bounding functions of with respect to as follows:
Let . Then, from (5), if , the first-part LLBF of , denoted by , with respect to is given by where denotes the th component of . And if ,
2.3. Second-Part Parametric Linear Relaxation
Now, by Theorem 1, we construct the second-part LLBF of with respect to the variable . For any interval vector and any , let . For convenience, the following notation and functions are introduced: where denotes the th unit vector in . Then, by Theorem 1, for any vector , we define the LLBF of by as follows:
Then, if , we can construct the LLBF of as follows:
And, for , we can get the LLBF of as
Taken together, the LLBF of function with respect to can be obtained as
Obviously, for all , .
2.4. Approximation Relaxation Linear Programming
Consequently, the approximation relaxation lower linear programming (LLP) of problem (NPP) with the parametric vector in the interval vector is obtained as follows:
Based on the linear underestimators, every feasible point of (NPP) is feasible for (LLP), and the objective of (LLP) is smaller than or equal to that of (NPP) at all points in . Thus, (LLP) provides a valid lower bound for the solution of (NPP) over the partition set . It should be noted that problem (LLP) contains only the constraints necessary to guarantee convergence of the algorithm. The following results are key to the convergence of the proposed algorithm.
Lemma 2. For all , and , let
Then one has .
Proof. From Theorem 1 and the definition of the function , for any , it follows that
where is the gradient function of , for some , and are the vertices of the interval vectors and , respectively. By (6) and the proof of Theorem 1, the right-hand side of inequality (29) satisfies, for arbitrarily fixed ,
where for some . It shows that
Similarly, we can prove that .
Similarly, we have Lemma 3 (also see Lemma 1 in [38]).
Lemma 3. For all , let
Then .
Theorem 4. For any , let . Then, as , for any , the difference between and satisfies .
Proof. Firstly, notice that when . Then, for any , and for any , let
and let and . Therefore, we only need to prove as .
We first prove . Since
it is obvious that we only need to prove . We first consider the difference . By the definition of , it follows that
where . Then, by Lemma 2, as .
Now, the difference is considered. From the definition of , , we can obtain
Then, by Lemma 3, as . Therefore, when , we can get
By similar discussion as above, we can get
It follows from (37) and (38) that when .
Theorem 4 shows that as the subhyperrectangle becomes small enough, the solution to (LLP)() sufficiently approaches the solution of (NPP)(), and this guarantees the global convergence of the method.
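The shrinking approximation gap behind Theorem 4 can be observed numerically in the one-dimensional case: for ln on an interval, the maximum gap between a tangent overestimator and the chord underestimator decreases rapidly as the interval shrinks. A small sketch under these assumptions (the helper name and the midpoint tangent are ours):

```python
import math

def max_gap_ln(l, u, samples=200):
    """Maximum vertical gap over [l, u] between the tangent line to ln
    at the midpoint (a linear overestimator, since ln is concave) and
    the chord through the endpoints (a linear underestimator)."""
    m = 0.5 * (l + u)
    k = (math.log(u) - math.log(l)) / (u - l)      # chord slope
    gap = 0.0
    for i in range(samples + 1):
        x = l + (u - l) * i / samples
        upper = math.log(m) + (x - m) / m          # tangent at m
        lower = math.log(l) + k * (x - l)          # chord
        gap = max(gap, upper - lower)
    return gap

# Halving the interval more than halves the maximal gap, consistent
# with the gap vanishing as the subrectangle diameter tends to zero.
assert max_gap_ln(1.0, 1.5) < 0.5 * max_gap_ln(1.0, 2.0)
```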
3. Algorithm and Its Convergence
In this section, a branch-and-bound algorithm is developed to solve (NPP), based on the relaxation lower linear programming derived in Section 2. The algorithm solves a sequence of linear programs over partitioned subsets of in order to find a global optimum. Consequently, the method partitions the set into subhyperrectangles, each associated with a node of the branch-and-bound tree, and each node is associated with a relaxation linear subproblem on its subhyperrectangle.
First, at any stage of the algorithm, suppose that we have a collection of active nodes denoted by , each associated with a subhyperrectangle . For each node , we will have computed a lower bound on the optimal value of problem ((NPP)()) via the solution of problem (LLP), so that the lower bound on the optimal value of (NPP) over the whole initial box region at stage is given by . Whenever the lower bounding solution to any node subproblem, that is, the solution to the relaxation linear programming (LLP), turns out to be feasible for (NPP), we update the upper bound and the incumbent solution if necessary. Then, the active node collection will satisfy for each stage . We then select an active node such that for further consideration. The selected node is partitioned into two subhyperrectangles according to the branching rule below. For these two subhyperrectangles, a fathoming step is applied in order to identify whether they should be eliminated. Finally, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence is obtained.
3.1. Branching Rule
The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule. This rule is sufficient to ensure convergence, since it drives all the intervals to zero for the variables associated with the term yielding the greatest discrepancy in the employed approximation along any infinite branch of the branch-and-bound tree.
Consider any node subproblem identified by the hyperrectangle . The selection of the branching variable and the partitioning of are then done using the following rule (see also [39, 40]). Let , and partition by bisecting the interval into the subintervals and .
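The rule above amounts to splitting the selected hyperrectangle along its longest edge at the midpoint. A minimal sketch (the function name and the box representation as a list of bound pairs are ours):

```python
def bisect_longest_edge(box):
    """Standard bisection branching rule: split the hyperrectangle along
    its longest edge at the midpoint, producing two subrectangles.

    `box` is a list of (lower, upper) pairs, one pair per variable.
    """
    # index of the longest edge
    j = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, up = box[j]
    mid = 0.5 * (lo + up)
    left = box[:j] + [(lo, mid)] + box[j + 1:]
    right = box[:j] + [(mid, up)] + box[j + 1:]
    return left, right

# the second edge (length 4) is the longest, so it is the one bisected
b1, b2 = bisect_longest_edge([(0.0, 1.0), (0.0, 4.0)])
```

Repeated application drives the edge lengths of every nested sequence of boxes to zero, which is the exhaustiveness property the convergence argument relies on.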
3.2. Algorithmic Statement
The deterministic global optimization algorithm is summarized as follows.
Step 0 (initialization). (0.1) Initialize the iteration counter , the set of all active nodes , the upper bound , and the set of feasible points . (0.2) Solve (LLP) with in order to find an optimal solution and the optimal value . If is feasible for (NPP), then set , and , if necessary. (0.3) If , where is a prescribed accuracy tolerance, then stop: is a global optimal solution to (NPP). Otherwise, set and proceed to Step 1.
Step 1 (partitioning step). According to the rectangle bisection rule, select a branching variable and partition to obtain two new subhyperrectangles . Call the set of new partition rectangles .
Step 2 (feasibility check for (NPP) on subhyperrectangles). For each new node and each , compute the lower bound of each linear constraint function over the currently considered rectangle; that is, compute the lower bound . If there exists some such that , then the corresponding subrectangle is eliminated from ; that is, , and we skip to the next element of .
Step 3 (bounding step). If , go to Step 5. If , solve LLP() to obtain and for each . If , set . Otherwise, if is feasible for (NPP), then update and , if necessary.
Step 4 (updating the upper bound). Select the midpoint of ; if is feasible for (NPP)(), then . Define the upper bound . If , denote the best known feasible point by .
Step 5 (updating the lower bound). The remaining partition set is now , and a new lower bound is .
Step 6 (convergence check). Set . If , then stop, with as the optimal value of (NPP) and as an optimal solution. Otherwise, select an active node such that , set , and go to Step 1.
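The steps above can be sketched as a generic rectangular branch-and-bound loop. The code below is our own schematic rendering, not the paper's pseudocode: the (LLP) solve, the feasibility check, and the bisection rule are abstracted as callbacks, and node selection follows the smallest-lower-bound rule of Step 6.

```python
def branch_and_bound(box, lower_bound, evaluate, split,
                     eps=1e-6, max_iter=100000):
    """Generic rectangular branch-and-bound skeleton mirroring Steps 0-6.

    box          initial region (any format the callbacks understand)
    lower_bound  box -> (valid lower bound over box, candidate point);
                 stands in for solving the relaxation (LLP)
    evaluate     point -> objective value, or None if infeasible;
                 stands in for the feasibility check for (NPP)
    split        box -> iterable of subboxes (the partitioning step)
    """
    best_val, best_x = float("inf"), None
    lb0, x0 = lower_bound(box)                 # Step 0: root relaxation
    v0 = evaluate(x0)
    if v0 is not None:
        best_val, best_x = v0, x0
    active = [(lb0, box)]
    for _ in range(max_iter):
        if not active:
            break
        # Step 6: select the active node with the smallest lower bound
        active.sort(key=lambda node: node[0])
        lb, cur = active.pop(0)
        if best_val - lb <= eps:               # convergence check
            break
        for child in split(cur):               # Step 1: partition
            clb, cand = lower_bound(child)
            if clb >= best_val - eps:          # fathoming
                continue
            val = evaluate(cand)               # Step 4: upper bound update
            if val is not None and val < best_val:
                best_val, best_x = val, cand
            active.append((clb, child))
    return best_val, best_x
```

As a sanity check, plugging in a one-dimensional Lipschitz-type lower bound for a smooth objective drives the returned point toward the minimizer; in the paper's setting, `lower_bound` corresponds to solving (LLP) on the subrectangle and `evaluate` to checking feasibility for (NPP) and evaluating its objective.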
3.3. Convergence of the Algorithm
Based on Theorem 4, the global convergence of the algorithm is established in Theorem 5.
Theorem 5. The above algorithm either terminates finitely with the incumbent solution being optimal to (NPP) or generates an infinite sequence of iterations such that, along any infinite branch of the branch-and-bound tree, any accumulation point of the sequence is a global solution to (NPP).
Proof. If the proposed algorithm terminates finitely, then obviously is the global optimal value and is an optimal solution of (NPP). If the algorithm is infinite, it generates at least one infinite sequence such that for any . Then, from [39, 40], for some point . For every iteration of the algorithm, the following results hold:
Since is contained in a compact set , there must be a convergent subsequence; assume . Then, from the proposed algorithm, there exists a decreasing subsequence , where , with , and . By Theorem 4, we have .
All that remains is to prove that is feasible for (NPP)(). First, it is obvious that , since is closed. Second, by the algorithm, for all , is a feasible solution to (NPP); that is, . Taking limits over in this inequality yields . The remainder of the proof is by contradiction. Assume that for some . Because the function is continuous and, again by Theorem 4, the sequence converges to , by the definition of convergence there must exist such that for all . Therefore, for all , we have , which implies that LLP() is infeasible, violating the assumption that . This is a contradiction, and the proof is complete.
4. Numerical Experiments
To verify the performance of the proposed global optimization algorithm, some test problems were implemented. The test problems are coded in C++, and the experiments were conducted on a Pentium IV (3.06 GHz) microcomputer. We set . The results of Examples 1–5 are summarized in Table 1. In Table 1, the following notation is used for the row headers: Iter.: number of algorithm iterations; : the maximal length of the enumeration tree.

Example 1 (see [13]). Consider
Example 2 (see [13]). Consider
Example 3 (see [14]). Consider
Example 4 (see [11, 12]). Consider
Example 5 (see [11, 12]). Consider
Example 6. In this example, we solve 10 different random instances: where , , is a matrix, and all elements of , , and are randomly generated in the range . Table 2 summarizes our computational results. In Table 2, the following indices characterize the performance of the algorithm: (): the dimensions of the matrix ; Iter.: the average number of iterations; time: the average execution time in seconds.

5. Conclusion
In this paper, a global optimization algorithm is presented for a class of nonconvex programming problems (NPP). A transformation and a two-part parametric linearization technique are employed to reduce the initial (NPP) to a parametric relaxation lower linear programming problem, based on linear lower bounds of the objective and nonlinear constraint functions. Thus the initial (NPP) is reduced to a sequence of linear programming problems through the successive refinement of a linear relaxation of the feasible region and of the objective function. The algorithm attains finite convergence to the global minimum through this successive refinement and the solution of a series of linear programming problems. The proposed algorithm was applied to several test problems, and in all cases convergence to the global minimum was achieved.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have improved an earlier version of this paper. This work was supported by the Ph.D. Startup Fund of the Natural Science Foundation of Guangdong Province, China (no. S2013040012506), and the Project Science Foundation of Guangdong University of Finance (no. 2012RCYJ005).
References
[1] S. Schaible and T. Ibaraki, “Fractional programming,” European Journal of Operational Research, vol. 12, no. 4, pp. 325–338, 1983.
[2] C. D. Maranas, I. P. Androulakis, C. A. Floudas, A. J. Berger, and J. M. Mulvey, “Solving long-term financial planning problems via global optimization,” Journal of Economic Dynamics & Control, vol. 21, no. 8-9, pp. 1405–1425, 1997.
[3] J. M. Mulvey, R. J. Vanderbei, and S. A. Zenios, “Robust optimization of large-scale systems,” Operations Research, vol. 43, no. 2, pp. 264–281, 1995.
[4] M. C. Dorneich and N. V. Sahinidis, “Global optimization algorithms for chip design and compaction,” Engineering Optimization, vol. 25, no. 2, pp. 131–154, 1995.
[5] K. P. Bennett and O. L. Mangasarian, “Bilinear separation of two sets in n-space,” Computational Optimization and Applications, vol. 2, no. 3, pp. 207–227, 1993.
[6] Z. Drezner, S. Schaible, and D. Simchi-Levi, “Queueing-location problems on the plane,” Naval Research Logistics, vol. 37, no. 6, pp. 929–935, 1990.
[7] S. Zhang, “Stochastic queue location problems,” Tinbergen Institute Research Series 14, Econometric Institute, Erasmus University, Rotterdam, The Netherlands, 1991.
[8] H. Konno and M. Inori, “Bond portfolio optimization by bilinear fractional programming,” Journal of the Operations Research Society of Japan, vol. 32, no. 2, pp. 143–158, 1989.
[9] H. M. Markowitz, Portfolio Selection, Basil Blackwell, Oxford, UK, 2nd edition, 1991.
[10] H. W. Zhang, W. L. Xu, S. L. Di, and P. F. Thomson, “Quadratic programming method in numerical simulation of metal forming process,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 49-50, pp. 5555–5578, 2002.
[11] H. Jiao, “A branch and bound algorithm for globally solving a class of nonconvex programming problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 70, no. 2, pp. 1113–1123, 2009.
[12] P. Shen, X. Bai, and W. Li, “A new accelerating method for globally solving a class of nonconvex programming problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 7-8, pp. 2866–2876, 2009.
[13] T. Kuno, “A revision of the trapezoidal branch-and-bound algorithm for linear sum-of-ratios problems,” Journal of Global Optimization, vol. 33, no. 2, pp. 215–234, 2005.
[14] P.-P. Shen and C.-F. Wang, “Global optimization for sum of generalized fractional functions,” Journal of Computational and Applied Mathematics, vol. 214, no. 1, pp. 1–12, 2008.
[15] H. Konno and T. Kuno, “Generalized linear multiplicative and fractional programming,” Annals of Operations Research, vol. 25, no. 1–4, pp. 147–161, 1990.
[16] H. Konno and T. Kuno, “Linear multiplicative programming,” Mathematical Programming, vol. 56, no. 1, pp. 51–64, 1992.
[17] P. Shen and H. Jiao, “Linearization method for a class of multiplicative programming with exponent,” Applied Mathematics and Computation, vol. 183, no. 1, pp. 328–336, 2006.
[18] X.-G. Zhou and K. Wu, “A method of acceleration for a class of multiplicative programming problems with exponent,” Journal of Computational and Applied Mathematics, vol. 223, no. 2, pp. 975–982, 2009.
[19] H.-S. Ryoo and N. V. Sahinidis, “Global optimization of multiplicative programs,” Journal of Global Optimization, vol. 26, no. 4, pp. 387–418, 2003.
[20] S. Schaible, “A note on the sum of a linear and linear-fractional function,” Naval Research Logistics Quarterly, vol. 24, pp. 691–693, 1977.
[21] T. Matsui, “NP-hardness of linear multiplicative programming and related problems,” Journal of Global Optimization, vol. 9, no. 2, pp. 113–119, 1996.
[22] H. Konno, Y. Yajima, and T. Matsui, “Parametric simplex algorithms for solving a special class of nonconvex minimization problems,” Journal of Global Optimization, vol. 1, no. 1, pp. 65–81, 1991.
[23] A. Cambini, L. Martein, and S. Schaible, “On maximizing a sum of ratios,” Journal of Information & Optimization Sciences, vol. 10, no. 1, pp. 65–79, 1989.
[24] H. Konno, T. Kuno, and Y. Yajima, “Global minimization of a generalized convex multiplicative function,” Journal of Global Optimization, vol. 4, no. 1, pp. 47–62, 1994.
[25] H. Konno and H. Yamashita, “Minimizing sums and products of linear fractional functions over a polytope,” Naval Research Logistics, vol. 46, no. 5, pp. 583–596, 1999.
[26] M. Dür, R. Horst, and N. Van Thoai, “Solving sum-of-ratios fractional programs using efficient points,” Optimization, vol. 49, no. 5-6, pp. 447–466, 2001.
[27] H. Konno and K. Fukaishi, “A branch and bound algorithm for solving low rank linear multiplicative and fractional programming problems,” Journal of Global Optimization, vol. 18, no. 3, pp. 283–299, 2000.
[28] I. Quesada and I. E. Grossmann, “A global optimization algorithm for linear fractional and bilinear programs,” Journal of Global Optimization, vol. 6, no. 1, pp. 39–76, 1995.
[29] T. Kuno, “A branch-and-bound algorithm for maximizing the sum of several linear ratios,” Journal of Global Optimization, vol. 22, no. 1–4, pp. 155–174, 2002.
[30] H. P. Benson, “A simplicial branch and bound duality-bounds algorithm for the linear sum-of-ratios problem,” European Journal of Operational Research, vol. 182, no. 2, pp. 597–611, 2007.
[31] N. T. H. Phuong and H. Tuy, “A unified monotonic approach to generalized linear fractional programming,” Journal of Global Optimization, vol. 26, no. 3, pp. 229–259, 2003.
[32] S. Schaible and J. Shi, “Fractional programming: the sum-of-ratios case,” Optimization Methods & Software, vol. 18, no. 2, pp. 219–229, 2003.
[33] H. D. Sherali, “Global optimization of nonconvex polynomial programming problems having rational exponents,” Journal of Global Optimization, vol. 12, no. 3, pp. 267–283, 1998.
[34] H. D. Sherali and C. H. Tuncbilek, “New reformulation linearization/convexification relaxations for univariate and multivariate polynomial programming problems,” Operations Research Letters, vol. 21, no. 1, pp. 1–9, 1997.
[35] N. Z. Shor, Nondifferentiable Optimization and Polynomial Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
[36] J. B. Lasserre, “Global optimization with polynomials and the problem of moments,” SIAM Journal on Optimization, vol. 11, no. 3, pp. 796–817, 2001.
[37] J. B. Lasserre, “Semidefinite programming versus LP relaxations for polynomial programming,” Mathematics of Operations Research, vol. 27, no. 2, pp. 347–360, 2002.
[38] P. Shen, “Linearization method of global optimization for generalized geometric programming,” Applied Mathematics and Computation, vol. 162, no. 1, pp. 353–370, 2005.
[39] H. Tuy, Convex Analysis and Global Optimization, vol. 22, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
[40] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, Germany, 2nd edition, 1993.
Copyright
Copyright © 2014 Xue-Gang Zhou and Bing-Yuan Cao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.