
Research Article | Open Access

Volume 2016 | Article ID 1304954 | 8 pages | https://doi.org/10.1155/2016/1304954

# A Linearized Relaxing Algorithm for the Specific Nonlinear Optimization Problem

Accepted: 26 Apr 2016
Published: 02 Jun 2016

#### Abstract

We propose a new method for a specific nonlinear and nonconvex global optimization problem using a linear relaxation technique. To simplify the problem, we transform it into a lower linear relaxation form, and we solve the linear relaxation optimization problem by the Branch and Bound Algorithm. Under some reasonable assumptions, the global convergence of the algorithm is established. Numerical results show that this method is more efficient than previous methods.

#### 1. Introduction

Optimization problems appear in many fields, for example, technology and economics. There is a long history of methods created to solve such problems. We consider the following certain nonlinear optimization problem on the set .

Let be natural numbers, let be nonzero real constants, and let be real constants. Then we define the following four kinds of functions on : Let be two twice-differentiable functions satisfying the following conditions:

We consider the following nonlinear optimization problem on : We propose a specific nonlinear and nonconvex optimization technique for ; it generalizes the problem treated by Jiao et al. (2013) [4].

In our previous work [15], we treated the same problem by applying Pei-Ping and Gui-Xia's method [5]. That method needs to introduce new variables and takes a long time to solve the optimization problem.

Jiao et al. proposed a technique that does not introduce new variables for the following problems: We generalize the problem to and use Hongwei's idea [4, 16] to solve the problem ; that is, we propose a new method by generalizing Hongwei's method.

First, we transform into its linear relaxation problem. Second, we obtain the approximate value by the simplex method and the Branch and Bound Algorithm [17, 18]. To prepare for the linearization, we transform the variables . Let , , and .

We denote Accordingly, we obtain the equivalent problem of : Since the function exp is convex, we find lower and upper bounding linearized functions for it.
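Because exp is convex, its chord over an interval lies above the curve and any tangent lies below it; these are exactly the upper and lower bounding lines used in the relaxation. A minimal numeric sketch (in Python for illustration; the paper's own implementation is in Matlab):

```python
import math

def secant_upper(a, b):
    """Chord of exp over [a, b]: an upper bound because exp is convex."""
    slope = (math.exp(b) - math.exp(a)) / (b - a)
    return lambda x: math.exp(a) + slope * (x - a)

def tangent_lower(x0):
    """Tangent of exp at x0: a lower bound because exp is convex."""
    return lambda x: math.exp(x0) + math.exp(x0) * (x - x0)

a, b = 0.0, 1.0
upper = secant_upper(a, b)
lower = tangent_lower(0.5 * (a + b))
for i in range(11):
    x = a + (b - a) * i / 10
    assert lower(x) <= math.exp(x) <= upper(x) + 1e-12
```

Both bounding lines are linear in x, so replacing exp by them turns each convex constraint into a pair of linear ones.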

In Section 2, we show how to linearize the original problem . In Section 3, we present our method by using the Branch and Bound Algorithm. In Section 4, we prove the convergence of the algorithm. In Section 5, we treat numerical experiments.

#### 2. Linear Relaxation Programming

In this section, we show how to transform into its linear relaxation problem.

We define

Corresponding to the transformation of coordinates, the domain is changed from to as follows: Since all are convex, there exist lower and upper bounding linear functions for them. We denote these functions by and .

When is negative, we can define the lower linearized function : Since each is continuous and differentiable on , by the mean value theorem there exists such that

Since is a monotonic function on , it has an inverse function . Hence is uniquely given such that When is positive, we define : For the upper linearized function of , we define as follows.
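For concreteness, taking exp as the convex function (consistent with the transformation in Section 1), the mean value theorem yields an explicit tangent point: the tangent there is parallel to the chord, giving the tightest lower linear bound with that slope.

```latex
% Chord slope of exp over [a, b]:
k = \frac{e^{b} - e^{a}}{b - a},
% since (e^{x})' = e^{x} is monotonic, the mean value point is unique:
\xi = (\exp')^{-1}(k) = \ln\frac{e^{b} - e^{a}}{b - a},
% tangent at \xi, the lower linear bound parallel to the chord:
\ell(x) = e^{\xi}\,(x - \xi + 1) \le e^{x}, \qquad x \in [a, b].
```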

When is negative, we define

When is positive, we define : By the above definitions, we have the lower and upper linearized functions of , namely and . The functions , , , , and are defined for , , and in the same way. Moreover, we can assume , , , and by adding some constraints.

Now, we define the new variables and . Let us consider the lower linearized functions , for , . We distinguish the following cases.

Case 1 (). In this case, we introduce the variable ; that is, , .
As , .
When , we define the lower linearized functions of as follows: Since is continuous and differentiable on , by the mean value theorem there exists such that .
Since is a monotonic function on , it has an inverse function . Hence is uniquely given such that When , we define the lower linear function as follows: Similarly, is defined as above.

Case 2 (). In this case, we introduce the variable ; that is, and .
As , .
When , we define the linear functions ; that is, When , we define as follows: is also defined as above.

We now have the lower bounded linearized optimization problem of ; that is, We rewrite our problem under some technical assumptions.

We assume , , , and ; the problem is then where on and on satisfy the following condition:

#### 3. Branch and Bound Algorithm

In this section, we use the simplex method and the Branch and Bound Algorithm to find the approximate value of .

We set the initial domain , the active domain set , and the active domain , where is the number of domain subdivisions (the stage number of the algorithm) and is the number of active domains at stage . If is an active domain, we divide into two half domains . On each domain, we linearize the problem and solve the linearized problems to obtain lower and upper bounds for . Repeating these calculations yields convergent sequences of lower and upper bounds. This procedure leads to the optimal value and the optimal solution of our problem.

##### 3.1. Branching Rule

We select the branching variable such that , and we divide the interval into two half intervals and .
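The rule above (bisect the box along its longest edge at the midpoint) can be sketched as follows; `box`, a list of `[lower, upper]` pairs, is a hypothetical representation chosen for illustration:

```python
def branch(box):
    """Split a box (list of [lo, hi] intervals) along its longest edge."""
    # index of the longest edge: the branching variable
    j = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    mid = 0.5 * (box[j][0] + box[j][1])
    left = [list(iv) for iv in box]
    right = [list(iv) for iv in box]
    left[j][1] = mid    # left half: [lo_j, mid]
    right[j][0] = mid   # right half: [mid, hi_j]
    return left, right

left, right = branch([[0.0, 2.0], [0.0, 1.0]])
# dimension 0 has the longest edge, so it is bisected at 1.0
```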

##### 3.2. Algorithm Statement

Step 0. Let and . We set an appropriate value as the convergence tolerance, the initial upper bound , and . We solve and write and for the linear optimal solution and optimal value. If is feasible for , we update and set the initial lower bound . If , then we have obtained the -approximate optimal value and optimal solution of , and we stop the algorithm. Otherwise, we proceed to Step .

Step 1. For all , we divide into two half domains and according to above branching rule.

Step 2. For all and each domain , we calculate where are defined on : If there is a that satisfies for some , then the domain is infeasible for , and we delete it from . If all are deleted, then the problem has no feasible solution.

Step 3. For the remaining domains, we solve by the simplex algorithm and write for the obtained linear optimal solution and value. If is feasible for , we update . If , we delete the corresponding domain from . If , we have obtained the -approximate optimal value and optimal solution of , and we stop the algorithm. Otherwise, we proceed to Step .

Step 4. We update the index of the remaining domains to . We reset , let be the set of , and return to Step .
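Steps 0-4 above form a standard branch-and-bound loop: solve a relaxation for a lower bound, evaluate feasible points for an upper bound, bisect, and prune. The sketch below is not the paper's specific linearization; it uses a toy objective f(x) = sin(x) + 0.1x² with a simple valid interval lower bound (sin is 1-Lipschitz, and the convex quadratic is minimized over the interval directly), purely to show the control flow:

```python
import heapq
import math

def f(x):
    return math.sin(x) + 0.1 * x * x

def lower_bound(a, b):
    # sin is 1-Lipschitz: sin(x) >= sin(m) - (b - a)/2 on [a, b];
    # 0.1 x^2 is convex: its minimum over [a, b] is at 0 clamped to [a, b].
    m = 0.5 * (a + b)
    x0 = min(max(0.0, a), b)
    return math.sin(m) - 0.5 * (b - a) + 0.1 * x0 * x0

def branch_and_bound(a, b, eps=1e-6):
    best_x = 0.5 * (a + b)
    best_val = f(best_x)                    # Step 0: initial upper bound
    heap = [(lower_bound(a, b), a, b)]      # active domain set
    while heap:
        lb, a, b = heapq.heappop(heap)      # most promising active domain
        if lb >= best_val - eps:            # Step 3: prune by bound
            continue
        m = 0.5 * (a + b)                   # Step 1: bisect the domain
        for lo, hi in ((a, m), (m, b)):
            x = 0.5 * (lo + hi)
            if f(x) < best_val:             # update the incumbent
                best_x, best_val = x, f(x)
            child_lb = lower_bound(lo, hi)  # Step 2: bound the half domain
            if child_lb < best_val - eps:   # Step 4: keep it active
                heapq.heappush(heap, (child_lb, lo, hi))
    return best_x, best_val

x_star, v_star = branch_and_bound(-4.0, 4.0)
```

Because the relaxation gap vanishes as the intervals shrink, the loop terminates with an eps-optimal value, mirroring the termination test in Step 3.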

#### 4. Convergence of the Optimization Method

In this section, we prove the following two theorems to guarantee the convergence of our optimization method.

Theorem 1. If , then One proves convergence (21); the convergence of (22) is proved by the same procedure as (21).

Proof. We show the following.
If , then for each . Consider We prove the convergence of the four terms above.
(i) The proof of   is as follows.
When , we define the lower linearized function of ; that is, Then We put , . It is If , then . Hence .
When , we define as follows: When , then . Then .
(ii) The proof of   is as follows.
We show that if .
By the definition of , we show that for any similarly to (i).
(iii) The proof of   is as follows.
We prove that if .
By the definition of , we show that for any similarly to (i).
(iv) The proof of   is as follows.
If , then .
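The convergence argument above rests on the linearization gap vanishing as the domains shrink. A numeric illustration, taking exp as the convex function for concreteness (for a twice-differentiable convex function the gap of both the chord and the mean-value tangent is O(width²)):

```python
import math

def max_gaps(a, b, n=1000):
    """Max gap between exp and its chord (above) / mean-value tangent (below)."""
    k = (math.exp(b) - math.exp(a)) / (b - a)   # chord slope
    xi = math.log(k)                            # mean value point: exp'(xi) = k
    gap_up = gap_lo = 0.0
    for i in range(n + 1):
        x = a + (b - a) * i / n
        chord = math.exp(a) + k * (x - a)
        tangent = math.exp(xi) * (x - xi + 1.0)
        gap_up = max(gap_up, chord - math.exp(x))
        gap_lo = max(gap_lo, math.exp(x) - tangent)
    return gap_up, gap_lo

up1, lo1 = max_gaps(0.0, 1.0)    # gaps on an interval of width 1
up2, lo2 = max_gaps(0.0, 0.5)    # halving the width shrinks both gaps ~4x
```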

Theorem 2. Suppose that problem has a global optimal solution, denoted by . Then one has the following.
(i) For the case : the algorithm always terminates after finitely many iterations yielding a global -optimal solution and a global -optimal value for problem in the sense that , with .
(ii) For the case : one assumes that the sequence of convergence tolerances satisfies ; that is, . One also assumes that the sequence consists of optimal solutions of corresponding to . Then any accumulation point of is a global optimal solution of .

Proof. (i) It is obvious from the algorithm statement.
(ii) We denote the upper bound corresponding to by : Then is a point sequence in a bounded closed set, so it has a convergent subsequence .
We denote , and then Now is a monotone decreasing sequence; therefore it is convergent. We put : Since is a continuous function, . Therefore , and . Since , for each , and is continuous, we obtain .

#### 5. Numerical Experiment

In this section, we show some numerical experiments for these optimization problems according to the rules above. The algorithm is coded in Matlab. In the code, we use Matlab's built-in function linprog to solve the linear optimization problems.
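The paper's code calls Matlab's linprog; an analogous call in Python with SciPy is shown below, purely for illustration, on a toy LP of the same shape as the relaxed subproblems (minimize cᵀy subject to A y ≤ b with box bounds):

```python
from scipy.optimize import linprog

# Toy LP in the shape of the relaxed subproblems:
# minimize  -y1 - 2*y2
# subject to  y1 + y2 <= 1,  0 <= y1 <= 1,  0 <= y2 <= 1
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0]]
b_ub = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
# res.x holds the optimal vertex, res.fun the optimal value,
# res.status == 0 signals success (mirroring linprog's exit flag in Matlab)
```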

Example 1. Consider We set . Running the algorithm, we found a global -optimal value at the global -optimal solution .

Example 2. Consider We set . Running the algorithm, we found a global -optimal value at the global -optimal solution .

Example 3. Consider We set . Running the algorithm, we found a global -optimal value at the global -optimal solution .

#### 6. Concluding Remarks

In this paper, we have proposed a specific nonlinear and nonconvex optimization technique that does not introduce new variables, applying Hongwei's method [4, 16]. We computed the examples of our previous work [15] with the new method. In [15], it took over 8 hours to find the optimal value for each problem; the proposed method computes the same problems in 10 minutes. If the algorithm were coded in C or C++, the optimal value would be obtained in even less time.

#### Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to thank Y. Yamamoto for arranging the system of computers.

1. P. Shen, Y. Duan, and Y. Ma, "A robust solution approach for nonconvex quadratic programs with additional multiplicative constraints," Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 514–526, 2008.
2. C. A. Floudas and C. E. Gounaris, "A review of recent advances in global optimization," Journal of Global Optimization, vol. 45, no. 1, pp. 3–38, 2009.
3. Y. Ji, K.-C. Zhang, and S.-J. Qu, "A deterministic global optimization algorithm," Applied Mathematics and Computation, vol. 185, no. 1, pp. 382–387, 2007.
4. H. Jiao, Z. Wang, and Y. Chen, "Global optimization algorithm for sum of generalized polynomial ratios problem," Applied Mathematical Modelling, vol. 37, no. 1-2, pp. 187–197, 2013.
5. S. Pei-Ping and Y. Gui-Xia, "Global optimization for the sum of generalized polynomial fractional functions," Mathematical Methods of Operations Research, vol. 65, no. 3, pp. 445–459, 2007.
6. M. Chiang, "Nonconvex optimization for communication networks," in Advances in Applied Mathematics and Global Optimization, vol. 17 of Applied Mathematics and Mechanics, pp. 137–196, Springer, New York, NY, USA, 2009.
7. J.-W. Lee, R. R. Mazumdar, and N. B. Shroff, "Non-convex optimization and rate control for multi-class services in the internet," IEEE/ACM Transactions on Networking, vol. 13, no. 4, pp. 827–840, 2005.
8. D. Cai and T. G. Nitta, "Limit of the solutions for the finite horizon problems as the optimal solution to the infinite horizon optimization problems," Journal of Difference Equations and Applications, vol. 17, no. 3, pp. 359–373, 2011.
9. D. Cai and T. G. Nitta, "Optimal solutions to the infinite horizon problems: constructing the optimum as the limit of the solutions for the finite horizon problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e2103–e2108, 2009.
10. R. Okumura, D. Cai, and T. G. Nitta, "Transversality conditions for infinite horizon optimality: higher order differential problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. e1980–e1984, 2009.
11. H. Konno and H. Watanabe, "Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach," Journal of the Operations Research Society of Japan, vol. 39, no. 3, pp. 295–306, 1996.
12. H. P. Benson, "Using concave envelopes to globally solve the nonlinear sum of ratios problem," Journal of Global Optimization, vol. 22, no. 1, pp. 343–364, 2002.
13. H. Konno and K. Fukaishi, "A branch and bound algorithm for solving low rank linear multiplicative and fractional programming problems," Journal of Global Optimization, vol. 18, no. 3, pp. 283–299, 2000.
14. N. T. H. Phuong and H. Tuy, "A unified monotonic approach to generalized linear fractional programming," Journal of Global Optimization, vol. 26, no. 3, pp. 229–259, 2003.
15. M. Horai, H. Kobayashi, and T. G. Nitta, "Global optimization for the sum of certain nonlinear functions," Abstract and Applied Analysis, vol. 2014, Article ID 408918, 8 pages, 2014.
16. H. Jiao and Y. Chen, "A note on a deterministic global optimization algorithm," Applied Mathematics and Computation, vol. 202, no. 1, pp. 67–70, 2008.
17. Y. Gao, Y. Shang, and L. Zhang, "A branch and reduce approach for solving nonconvex quadratic programming problems with quadratic constraints," OR Transactions, vol. 9, no. 2, pp. 9–20, 2005.
18. J. Linderoth, "A simplicial branch-and-bound algorithm for solving quadratically constrained quadratic programs," Mathematical Programming, vol. 103, no. 2, pp. 251–282, 2005.