Abstract and Applied Analysis

Volume 2014, Article ID 408918, 8 pages

http://dx.doi.org/10.1155/2014/408918

## Global Optimization for the Sum of Certain Nonlinear Functions

^{1}Faculty of Engineering, Graduate School of Engineering, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan
^{2}Department of Education, Mie University, Kurimamachiyamachi, Tsu 514-8507, Japan

Received 13 April 2014; Revised 19 August 2014; Accepted 20 August 2014; Published 10 November 2014

Academic Editor: Julio D. Rossi

Copyright © 2014 Mio Horai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We extend the work of Pei-Ping and Gui-Xia, 2007, to a global optimization problem for more general functions. Pei-Ping and Gui-Xia treated the optimization problem for a linear sum of polynomial fractional functions using a branch and bound approach. We prove that this extension makes it possible to solve, by the branch and bound algorithm, nonconvex optimization problems that the method of Pei-Ping and Gui-Xia, 2007, cannot handle: problems in which the objective is a sum of functions with positive (or negative) first and second derivatives whose variables are defined by sums of polynomial fractional functions.

#### 1. Introduction

Optimization problems are widely used in the sciences, especially in engineering and economics [1–3]. In 2007, Pei-Ping and Gui-Xia considered the following global optimization problem in [4]: where , , , are given generalized polynomials. One has

Sum-of-ratios problems like attract a lot of attention because such problems are applied to various economic problems [4].

Pei-Ping and Gui-Xia proposed a method to solve these problems globally using a branch and bound algorithm in [4]. In the above problem, the objective function and the constraint functions are sums of generalized polynomial fractional functions. We extend these functions to more general functions as below: where , , , ; that is, where , , , are natural numbers, , , , are nonzero real constants, and , , , are real constants.

We assume that , are twice differentiable and either monotone increasing or monotone decreasing functions. We classify these functions as monotone increasing or monotone decreasing as follows:

Furthermore, we assume the following conditions for the second derivatives: To solve the above problem , we transform it into the equivalent problems , and then into the linear relaxation problem. We prove the equivalence of the problems under the above assumptions, and we solve the equivalent problem using a branch and bound algorithm corresponding to [4–6].

For example, with this extended approach, we can solve the following global optimization problem:

In this paper, we explain how to construct the equivalent relaxed linear problem from the original problem in Section 2. In Section 3, we present the branch and bound algorithm and its convergence. In Section 4, we present the results of numerical experiments.

#### 2. Equivalence Transformation and Linear Relaxation

In this section we first transform the problem to the equivalent problems and second transform to . Third, we linearize the problem corresponding to [4].

##### 2.1. Translation of the Problem into

For the problem , we introduce new variables , , and , and the functions and depending on , in the original problem : Since , , , are polynomials on the closed interval , it is easy to calculate the minimums and maximums of these functions on ; we denote them by , , , , , , , .

Let be the closed interval: where .

Let be the following closed domain in ; that is, We pose the problem on . Consider We now obtain Theorem 1, which proves the equivalence of and .

Theorem 1. *The problem on is equivalent to the problem on .*

*Proof. *Let be the optimal solution for ; we denote
and then

Furthermore, let be the optimal solution for . Then by the constraint conditions we have the following: for , and ; that is, ; for , and ; that is, ; for and , and ; that is, ; for and , and ; that is, .

The conditions , or and , or lead to
Therefore we obtain
Now,
that is, satisfies the constraints for .

Since the optimal solution for is , we obtain
so
For the optimal solution of , we denote
then
The element satisfies the conditions for . Since is the optimal solution for , it satisfies .

Hence ; that is, the two problems are equivalent.

##### 2.2. Translation of the Problem into

We change the variables by the logarithmic function . Since , , , , are positive, we can write , , , , using new variables ; that is, , , , , and .

The closed domain corresponds to the following , where Under this change of variables, the objective function and the constraint functions of are transformed into the following:

Now , , , , , , are represented as where is a real number satisfying or , and or .

Let be and let be .

Then the objective function and the constraint functions are changed to and .

Now we put Then the problem is transformed naturally to the following problem :

##### 2.3. Linearization of the Problem

The objective and constraint functions of are nonlinear. On , we approximate by lower bounding linear functions, and we can thereby transform into a linear optimization problem. Its solution is a lower bound of the optimal value on . We denote by , , , , , , , , , the minimums and maximums of , , , , , , , , , .

And we denote ; that is,

Now, or , and or , and is a monotonic convex function on . Hence there exist upper and lower bounding linear functions ( and ) of .

We denote As is continuous on and differentiable on , by the mean value theorem there exists such that .

Since or , is a monotonic function on , so the inverse function of exists. Hence is uniquely determined, , such that , and we define By the definition, .

For all , .

Let . Then is a linear function lying below the convex envelope of on the rectangle.
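The construction above can be illustrated concretely. The sketch below is an illustration only: the paper's own functions and symbols are not reproduced (its formulas do not survive in this copy), so a convex `f(x) = exp(x)` stands in. For a convex function on an interval, the secant line is an over-estimator, and the tangent at the mean-value point (where the derivative equals the secant slope) is an under-estimator; the function and helper names are hypothetical.

```python
import math

# Illustrative sketch (not the paper's code): linear under- and
# over-estimators of a convex function f on [a, b].  The secant line
# over [a, b] lies above f; the tangent at the mean-value point xi,
# where f'(xi) equals the secant slope, lies below f.

def estimators(f, inv_df, a, b):
    slope = (f(b) - f(a)) / (b - a)       # secant slope
    xi = inv_df(slope)                    # mean-value point: f'(xi) = slope
    upper = lambda x: f(a) + slope * (x - a)    # secant (over-estimator)
    lower = lambda x: f(xi) + slope * (x - xi)  # tangent (under-estimator)
    return lower, upper

a, b = 0.0, 1.0
# f = exp is convex and its derivative is exp, whose inverse is log.
lower, upper = estimators(math.exp, math.log, a, b)

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert lower(x) <= math.exp(x) + 1e-12 <= upper(x) + 1e-12
```

Both bounding lines share the secant's slope, which is what makes the vertical gap between them easy to control as the interval shrinks.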

is the linear problem obtained from by the lower bounding function of : By the definition of , any in satisfying the constraints of satisfies the constraints of .

Lemma 2. *The value of is less than the optimal value for the problem on .*

*Proof. *The definition of implies the statement directly.

Lemma 3. *Assume that , and . For each and , and .*

*Proof. *Let and .

Since , the values and satisfy .

Hence, .

Now,
The function is concave on ; therefore attains the maximum value of

We denote
Since for ,
Thus
On the other hand, is a convex function by the same argument, and we obtain the following for .

Lemma 4. *Under the same assumptions as in Lemma 3, for and each .*

*Proof. *Lemma 3 and the definitions , imply Lemma 4 in the standard way.

#### 3. Branch and Bound Algorithm and Its Convergence

In Section 2, we transformed the initial problem into the equivalent problem , and we constructed the linear relaxation problem of to find an approximate value of easily. We now obtain it using the branch and bound algorithm.

##### 3.1. Branch and Bound Algorithm

We solve the linear relaxation problem on the initial domain to get the linear optimal value as a lower bound of and an upper bound of . To prepare for subdividing the active domains, we let the set of active domains be and the active domain . Here denotes the number of domain subdivisions performed, that is, the stage number, and denotes the number of active domains at stage . If is an active domain, we divide into two half domains , linearize on each domain, and solve the linear problems. After these calculations, we obtain lower and upper bounds of . Repeating the calculation, the sequences of lower and upper bounds converge, and we obtain the optimal value and solution.

###### 3.1.1. Branching Rule

We denote . We select the branching variable such that , and we divide the interval into the half intervals and .
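The branching rule above can be sketched in a few lines. This is an illustration only (the paper gives no code, and the helper name `branch` is hypothetical): the box is represented as a list of intervals, the widest interval is selected, and it is bisected at its midpoint.

```python
def branch(domain):
    """Bisect the current box along its longest edge.

    `domain` is a list of (low, high) intervals, one per variable.
    This mirrors the paper's branching rule, which selects the
    variable whose interval has the largest width.
    """
    widths = [hi - lo for lo, hi in domain]
    k = widths.index(max(widths))          # branching variable
    lo, hi = domain[k]
    mid = (lo + hi) / 2.0
    left = domain[:k] + [(lo, mid)] + domain[k + 1:]
    right = domain[:k] + [(mid, hi)] + domain[k + 1:]
    return left, right

# The second interval is the widest, so it is the one bisected.
left, right = branch([(0.0, 1.0), (0.0, 4.0)])
```

Bisecting the longest edge guarantees that the diameter of every nested sequence of subdomains tends to zero, which is what the convergence argument of Section 3.2 relies on.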

###### 3.1.2. Algorithm Statement

*Step **0.* First, we let be 0 and be 1. We set an appropriate value of as a convergence tolerance, the initial upper bound , and . We solve , and we denote the linear optimal solution and optimal value by and . If is feasible for , then we update and set the initial lower bound . If , then we have obtained the -approximate optimal value and optimal solution of , so we stop the algorithm. Otherwise, we proceed to Step 1.

*Step **1.* For all , we divide to get two half domains, and , according to the above branching rule.

*Step **2.* For all and each domain , we calculate
where , , and are defined in Section 2.3.

If there is a that satisfies for some , then is an infeasible domain for , and we delete the domain from . If the are all deleted for all , then the problem has no feasible solution.

*Step **3.* For the remaining domains, we compute , , , and as defined in Sections 2.2 and 2.3. We solve by the simplex algorithm, and we denote the obtained linear optimal solutions and values by . Then, if is feasible for , we update . If , then we delete the corresponding domains from . If , then we have obtained the -approximate optimal value and optimal solution of , so we stop the algorithm. Otherwise, we proceed to Step 4.

*Step **4.* We update the indices of the remaining domains to ; then we initialize . We let be the set of the , and go to Step 1.
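The overall loop of Steps 0–4 can be sketched as follows. This is a simplified stand-in, not the paper's method: instead of solving the linear relaxation on each subdomain, the lower bound here comes from a Lipschitz under-estimator, which plays the same role of a cheap, valid lower bound; the function names, the one-dimensional setting, and the test problem are all illustrative assumptions.

```python
# Minimal branch-and-bound loop mirroring Steps 0-4: maintain an
# incumbent (upper bound), prune subdomains whose lower bound cannot
# beat it by more than eps, and bisect the survivors.

def branch_and_bound(f, lo, hi, lipschitz, eps=1e-4):
    active = [(lo, hi)]
    best_x, best_val = lo, f(lo)          # incumbent (upper bound)
    while active:
        next_active = []
        for a, b in active:
            mid = (a + b) / 2.0
            val = f(mid)                  # feasible point -> upper bound
            if val < best_val:
                best_x, best_val = mid, val
            # Lipschitz lower bound on [a, b] (stand-in for the
            # linear-relaxation value in the paper's Step 3).
            lower = val - lipschitz * (b - a) / 2.0
            if lower < best_val - eps:    # keep only domains that may improve
                next_active.append((a, mid))
                next_active.append((mid, b))
        active = next_active
    return best_x, best_val

# Example: minimize f(x) = (x - 1)^2 on [0, 3]; |f'| <= 4 there.
x_star, val = branch_and_bound(lambda x: (x - 1.0) ** 2, 0.0, 3.0, 4.0)
```

As in the algorithm statement, termination follows because every surviving subdomain halves in width each stage, so eventually its lower bound comes within eps of the incumbent and it is pruned.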

##### 3.2. The Convergence of the Algorithm

Corresponding to [4], we obtain the convergence of the algorithm.

Theorem 5. *Suppose that problem has a global optimal solution, and let be the global optimal value of . Then one has the following:*(i)*for the case , the algorithm always terminates after finitely many iterations, yielding a global -optimal solution and a global -optimal value for problem in the sense that
*(ii)*for the case , we assume that the sequence of convergence tolerances satisfies ; that is, . We also assume that is the optimal solution of corresponding to . Then any accumulation point of is a global optimal solution of .*

* Proof. *(i) This is immediate from the algorithm statement.

(ii) We assume that the upper bound corresponding to is :
is a point sequence on a bounded closed set, so has a convergent subsequence . We assume that ; then

is a monotone decreasing sequence, so it converges. We assume that :
is a continuous function, so . And ; that is, . For , . As is continuous, .

#### 4. Numerical Experiment

In this section, we present numerical experiments on these optimization problems according to the preceding rules. The algorithms are coded in MATLAB. In these codes, we use MATLAB's built-in function "linprog" to solve the linear optimization problems.
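For readers working in Python rather than MATLAB, the analogous routine is SciPy's `linprog` (an assumption on our part: the paper's own code calls MATLAB's `linprog`, and the toy LP below is purely illustrative, not one of the paper's relaxations).

```python
from scipy.optimize import linprog

# Illustrative LP in the form linprog expects (minimize c @ x subject
# to A_ub @ x <= b_ub and variable bounds):
#   minimize  -x0 - 2*x1
#   subject to x0 + x1 <= 4,  0 <= x0 <= 3,  0 <= x1 <= 3.
res = linprog(
    c=[-1.0, -2.0],
    A_ub=[[1.0, 1.0]],
    b_ub=[4.0],
    bounds=[(0.0, 3.0), (0.0, 3.0)],
)
# The unique optimum is x = (1, 3) with objective value -7.
```

In a branch and bound loop like the one in Section 3.1, a call of this shape would be issued once per active subdomain, with the bounds argument carrying that subdomain's intervals.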

*Example 1. *Consider
We set . After running the algorithm, we found a global -optimal value attained at the global -optimal solution .

*Example 2. *Consider
We set . After running the algorithm, we found a global -optimal value attained at the global -optimal solution .

*Example 3. *Consider
We set . After running the algorithm, we found a global -optimal value attained at the global -optimal solution .

*Example 4. *Consider
We set . After running the algorithm, we found a global -optimal value attained at the global -optimal solution .

#### 5. Concluding Remarks

In this paper, we proved that the branch and bound algorithm can solve nonconvex optimization problems which [4] cannot solve: problems in which the objective is a sum of functions with positive (or negative) first and second derivatives whose variables are defined by sums of polynomial fractional functions.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to thank S. Pei-Ping, H. Yaguchi, and S. Tsuyumine for their helpful suggestions and encouragement.

#### References

- D. Cai and T. G. Nitta, “Limit of the solutions for the finite horizon problems as the optimal solution to the infinite horizon optimization problems,” *Journal of Difference Equations and Applications*, vol. 17, no. 3, pp. 359–373, 2011.
- D. Cai and T. G. Nitta, “Optimal solutions to the infinite horizon problems: constructing the optimum as the limit of the solutions for the finite horizon problems,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 71, no. 12, pp. e2103–e2108, 2009.
- R. Okumura, D. Cai, and T. G. Nitta, “Transversality conditions for infinite horizon optimality: higher order differential problems,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 71, no. 12, pp. e1980–e1984, 2009.
- S. Pei-Ping and Y. Gui-Xia, “Global optimization for the sum of generalized polynomial fractional functions,” *Mathematical Methods of Operations Research*, vol. 65, no. 3, pp. 445–459, 2007.
- H. Jiao, Z. Wang, and Y. Chen, “Global optimization algorithm for sum of generalized polynomial ratios problem,” *Applied Mathematical Modelling*, vol. 37, no. 1-2, pp. 187–197, 2013.
- Y. J. Wang and K. C. Zhang, “Global optimization of nonlinear sum of ratios problem,” *Applied Mathematics and Computation*, vol. 158, no. 2, pp. 319–330, 2004.
