Abstract

We propose a new method for a specific nonlinear and nonconvex global optimization problem based on a linear relaxation technique. To simplify the problem, we transform it into a lower linear relaxation form and solve the relaxed problem by the Branch and Bound Algorithm. Under some reasonable assumptions, the global convergence of the algorithm is established. Numerical results show that the method is more efficient than previous methods.

1. Introduction

Optimization problems appear in many fields [1–3], for example, technology [4–7] and economics [8–10]. There is a long history of methods for solving such problems [11–14]. We consider the following nonlinear optimization problem on the set .

Let be natural numbers, let be nonzero real constants, and let be real constants. Then we define the following four kinds of functions on : Let be two twice differentiable functions satisfying the following conditions:

We consider the following nonlinear optimization problem on : We propose a specific nonlinear and nonconvex optimization technique for ; the problem generalizes the one treated by Jiao et al. (2013) [4].

In our previous work [15], we treated the same problem by applying the method of Pei-Ping and Gui-Xia [5]. That method requires introducing new variables and takes a long time to solve the optimization problem.

Jiao et al. proposed a technique that does not introduce new variables for the following problems: We generalize the problem to and use Hongwei's idea [4, 16] to solve the problem ; that is, we propose a new method by generalizing Hongwei's method.

First, we transform into its linear relaxation problem. Second, we obtain an approximate value by the Simplex method and the Branch and Bound Algorithm [17, 18]. As preparation for the linearization, we transform the variables . Let , , and .

We denote Accordingly, we obtain the equivalent problem of : Since the function "exp" is convex, we can find lower and upper bounding linearizations of it.
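For instance, since exp is convex, its chord over an interval lies above it, and the tangent line parallel to that chord (whose touch point exists by the mean value theorem and is computable because the derivative of exp is invertible) lies below it. The following Matlab sketch, using hypothetical bounds l and u rather than the paper's data, shows this pair of linear estimators; the construction in Section 2 has the same shape.

    % Linear under- and over-estimators of exp(y) on a hypothetical interval [l, u].
    l = -1;  u = 2;
    k  = (exp(u) - exp(l)) / (u - l);       % common slope (slope of the chord)
    xi = log(k);                            % touch point: exp'(xi) = k
    upperLin = @(y) exp(l) + k*(y - l);     % chord:   exp(y) <= upperLin(y) on [l, u]
    lowerLin = @(y) exp(xi) + k*(y - xi);   % tangent: lowerLin(y) <= exp(y) everywhere
    yy = linspace(l, u, 5);
    disp([lowerLin(yy); exp(yy); upperLin(yy)])   % rows satisfy lower <= exp <= upper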

In Section 2, we show how to linearize the original problem . In Section 3, we present our method based on the Branch and Bound Algorithm. In Section 4, we prove the convergence of the algorithm. In Section 5, we report numerical experiments.

2. Linear Relaxation Programming

In this section, we show how to transform into its linear relaxation problem.

We define

Under the transformation of coordinates, the domain changes from to , as follows: Since all are convex, there exist lower and upper bounding linear functions for them. We denote these functions by and .

When is negative, we can define the lower linearized function : Since each is continuous and differentiable on , there exists such that by the mean value theorem.

Since is a monotonic function on , its inverse function exists. Hence is uniquely determined such that When is positive, we define : For the upper linearized function of , we define as follows.

When is negative, we define

When is positive, we define : With the above definitions, we have the lower and upper linearized functions of ; that is, and . The functions , , , , , and are also defined for , , and by the same method. Moreover, we can assume , , , and by adding some constraints.

Now we define the new variables and . Let us consider the lower linearized functions , for , . We distinguish the following two cases.

Case 1 (). In this case, we put the variable ; that is, , .
As , .
When , we define the lower linearized functions of as follows: As is continuous and differentiable on , there exists such that by the mean value theorem.
Since is a monotonic function on , its inverse function exists. Hence is uniquely determined such that When , we define the lower linear function as follows: Similarly, is defined as above.

Case 2 (). In this case, we put the variable ; that is, and .
As , .
When , we define the linear functions ; that is, When , we define as follows: is defined analogously.

We now have the lower bounding linearized optimization problem of ; that is, We rewrite our problem under some technical assumptions.

We assume , , , and , and the problem becomes where on and on satisfy the following condition:

3. Branch and Bound Algorithm

In this section, we combine the Simplex method with the Branch and Bound Algorithm and show how to find an approximate value of .

We set the initial domain , the active domain set , and the active domain , where counts the subdivisions of the domain and the stages of the algorithm and is the number of active domains at stage . If is an active domain, we divide it into two half domains . On each domain, we linearize the problem and solve the linearized problems ( ) to obtain lower and upper bounds of . By repeating these calculations, we obtain convergent sequences of lower and upper bounds. The procedure yields the optimal value and an optimal solution of our problem.

3.1. Branching Rule

We select the branching variable such that . We divide the interval into two half intervals and .
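As an illustration only (the precise selection rule is the formula above), the following Matlab fragment, acting on a hypothetical rectangle, bisects the longest edge, which is one standard way to choose the branching variable.

    % Bisect a hypothetical box [lbnd, ubnd] along its widest coordinate.
    lbnd = [0 1 -2];   ubnd = [4 2 3];
    [~, j] = max(ubnd - lbnd);            % branching variable: longest edge
    mid = (lbnd(j) + ubnd(j)) / 2;
    ub1 = ubnd;  ub1(j) = mid;            % first half box:  [lbnd, ub1]
    lb2 = lbnd;  lb2(j) = mid;            % second half box: [lb2, ubnd]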

3.2. Algorithm Statement

Step 0. Let be 0, and let be 1. We set an appropriate convergence tolerance , the initial upper bound , and . We solve and write and for the linear optimal solution and the optimal value. If is feasible for , we update and set the initial lower bound . If , then we have obtained the -approximate optimal value and optimal solution of , and we stop the algorithm. Otherwise, we proceed to Step .

Step 1. For every , we divide into two half domains and according to the branching rule above.

Step 2. For every and each domain , we calculate where the are defined on : If there is that satisfies for some , then the domain is infeasible for . In that case, we delete the domain from . If the domains are deleted for all , then the problem has no feasible solution.

Step 3. For the remaining domains, we solve by the Simplex algorithm and write for the obtained linear optimal solution and optimal value. If is feasible for , we update . If , we delete the corresponding domain from . If , we obtain the -approximate optimal value and optimal solution of and stop the algorithm. Otherwise, we proceed to Step .

Step 4. We renumber the remaining domains as . We update , set to be the set of the , and go to Step .
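The following Matlab sketch shows only the bookkeeping of Steps 0–4 (initialize, split, bound, prune, update the incumbent, repeat) on a toy one-dimensional problem; everything in it is illustrative. In particular, the per-interval lower bound is taken from a Lipschitz constant instead of the paper's linear relaxation, which would be solved by the Simplex method on each domain.

    % Toy problem: minimize f(y) = sin(y) + 0.1*y on [0, 10].
    f    = @(y) sin(y) + 0.1*y;
    L    = 1.1;                               % |f'(y)| <= 1.1, used for the lower bound
    eps0 = 1e-4;                              % convergence tolerance (Step 0)
    boxes = [0 10];                           % active domains, one interval per row
    UB = f(5);  yBest = 5;  LB = -Inf;        % incumbent from the midpoint
    while UB - LB > eps0
        mid   = mean(boxes, 2);               % Step 1: bisect every active interval
        boxes = [boxes(:,1) mid; mid boxes(:,2)];
        mid   = mean(boxes, 2);
        half  = (boxes(:,2) - boxes(:,1)) / 2;
        lower = f(mid) - L*half;              % Steps 2-3: valid lower bound per interval
        [fmid, kBest] = min(f(mid));
        if fmid < UB, UB = fmid; yBest = mid(kBest); end
        keep  = lower <= UB - eps0;           % Step 3: prune intervals that cannot win
        boxes = boxes(keep, :);
        LB    = min(lower(keep));             % Step 4: proceed to the next stage
    end
    fprintf('approximate optimum %.5f at y = %.5f\n', UB, yBest);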

4. Convergence of the Optimization Method

In this section, we prove the following two theorems to guarantee the convergence of our optimization method.

Theorem 1. If , then One proves the convergence in (21); the convergence in (22) is proved by the same procedure as that of (21).

Proof. We show the following.
If , then for each . Consider We prove the convergence of the four terms above.
(i) The proof of   is as follows.
When , we define the lower linearized function of ; that is, Then we put , . It is If , then . Hence .
When , we define as follows: When , then . Then .
(ii) The proof of   is as follows.
We show that if .
By the definition of , we can show that for any , similarly to (i).
(iii) The proof of   is as follows.
We prove that if .
By the definition of , we can show that for any , similarly to (i).
(iv) The proof of   is as follows.
If , then .
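As a numerical sanity check of the behavior behind Theorem 1, the following Matlab sketch (using the exp under-estimator from the earlier illustration, on hypothetical intervals of shrinking width) shows the maximal gap between exp and its lower linearization tending to zero with the interval length.

    % Gap between exp and its chord-parallel tangent under-estimator
    % on intervals of shrinking width around a hypothetical point y0 = 1.
    y0 = 1;
    for w = [1 0.5 0.25 0.125]
        l  = y0 - w/2;   u = y0 + w/2;
        k  = (exp(u) - exp(l)) / (u - l);
        xi = log(k);
        yy = linspace(l, u, 1001);
        gap = max(exp(yy) - (exp(xi) + k*(yy - xi)));
        fprintf('width %.3f : maximal gap %.3e\n', w, gap);
    end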

Theorem 2. Suppose that problem has a global optimal solution, denoted by . Then one has the following.
(i) For the case : the algorithm always terminates after finitely many iterations, yielding a global -optimal solution and a global -optimal value for problem , in the sense that , with .
(ii) For the case : one assumes that the sequence of convergence tolerances satisfies ; that is, . We also assume that the sequence consists of optimal solutions of corresponding to . Then any accumulation point of is a global optimal solution of .

Proof. (i) This is immediate from the algorithm statement.
(ii) We denote by the upper bound corresponding to : Then is a sequence of points in a bounded closed set, and hence it has a convergent subsequence .
We denote , and then Now is a monotone decreasing sequence; therefore it is convergent. We put : Since is a continuous function, . Therefore, , and . Since , for each , and is continuous, we obtain that .

5. Numerical Experiment

In this section, we report numerical experiments for optimization problems of the above form. The algorithm is coded in Matlab. In the code, we use Matlab's built-in function "linprog" to solve the linear optimization problems.
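For illustration, each linear relaxation subproblem is solved with a call of the following form; the data below are a hypothetical toy LP, not one of the relaxations constructed above.

    % Toy LP: minimize c'*x subject to A*x <= b and lb <= x <= ub.
    c  = [1; -2];
    A  = [1 1];      b  = 3;
    lb = [0; 0];     ub = [2; 2];
    opts = optimoptions('linprog', 'Display', 'off');
    [x, fval, exitflag] = linprog(c, A, b, [], [], lb, ub, opts);
    fprintf('optimal value %.4f at x = (%.2f, %.2f)\n', fval, x(1), x(2));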

Example 1. Consider We set . After running the algorithm, we obtained a global -optimal value at the global -optimal solution .

Example 2. Consider We set . After running the algorithm, we obtained a global -optimal value at the global -optimal solution .

Example 3. Consider We set . After running the algorithm, we obtained a global -optimal value at the global -optimal solution .

6. Concluding Remarks

In this paper, we propose a specific nonlinear and nonconvex optimization technique which does not introduce new variables, by applying Hongwei's method [6]. We recompute the examples of our previous work [17] with the new method. In [17], it took over 8 hours to find the optimal value of each problem, whereas the proposed method solves the same problems within 10 minutes. If the algorithm were coded in C or C++, the optimal value would be obtained in even less time.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank Y. Yamamoto for arranging the system of computers.