#### Abstract

This paper presents a global optimization method for solving general nonlinear programming problems subjected to box constraints. Regardless of convexity or nonconvexity, by introducing a differential flow on the dual feasible space, a set of complete solutions to the original problem is obtained, and criteria for global optimality and existence of solutions are given. Our theorems improve and generalize recent known results in the canonical duality theory. Applications to a class of constrained optimal control problems are discussed. Particularly, an analytical form of the optimal control is expressed. Some examples are included to illustrate this new approach.

#### 1. Introduction

In this paper, we consider the following general box-constrained nonlinear programming problem (the primal problem for short): where the feasible space is a box determined by two given vectors of lower and upper bounds, and the objective function is twice continuously differentiable on it. The objective function may be either convex or nonconvex; our discussion covers both cases.

Problem (1.1) appears in many applications, such as engineering design, phase transitions, chaotic dynamics, information theory, and network communication [1, 2]. In particular, for suitable choices of the bounds, the problem leads to one of the fundamental problems in combinatorial optimization, namely, the integer programming problem [3]. Since the feasible space is a compact convex set and the objective function is continuous, the primal problem has at least one global minimizer. When the problem is convex, a global minimizer can be obtained by many well-developed nonlinear optimization methods based on Karush-Kuhn-Tucker (KKT) optimality theory [4]. However, when the objective function is nonconvex, traditional KKT theory and direct methods can only be used to find local optima. Hence, our interest in this paper is mainly in the nonconvex case. For the special case of minimizing a nonconvex quadratic function subject to box constraints, much progress has been made on locating the global optimal solution via the canonical duality theory of Gao (see [5–7] for details). As indicated in [8], the key step of the canonical duality theory is to introduce a canonical dual function, but commonly used methods are not guaranteed to construct one for the general form of the objective function given in (1.1). Thus, there has been comparatively little work on global optimality in the general case.

Inspired and motivated by these facts, this paper introduces a differential flow for constructing the canonical dual function and investigates a new approach to solving the general (especially nonconvex) nonlinear programming problem. By means of the canonical dual problem, conditions for global optimality are deduced, and global and local extrema of the primal problem can be identified. An application to the linear-quadratic optimal control problem with constraints is discussed. The results presented in this paper can be viewed as an extension and an improvement of the canonical duality theory [8–10].

The paper is organized as follows. In Section 2, a differential flow is introduced to present a general form of the canonical dual problem. The relation of this transformation to the classical Lagrangian method is discussed. In Section 3, we present a set of complete solutions to the primal problem by the approach presented in Section 2. The existence of the canonical dual solutions is also established. We give an analytic solution to the box-constrained optimal control problem via canonical dual variables in Section 4. Meanwhile, some examples are used to illustrate our theory.

#### 2. A Differential Flow and Canonical Dual Problem

At the beginning of this paper, we mentioned that our primary goal is to find the global minimizers of a general (mainly nonconvex) box-constrained optimization problem. Due to the assumed nonconvexity of the objective function, the classical Lagrangian is no longer a saddle function, and the Fenchel-Young inequality leads only to a weak duality relation. The nonzero difference between the primal and dual optimal values is called the *duality gap*. This duality gap shows that the well-developed Fenchel-Moreau-Rockafellar duality theory can be used mainly for solving convex problems. Also, due to the nonconvexity of the objective function, the problem may have multiple local solutions, and the identification of a global minimizer has been a fundamentally challenging task in global optimization. In order to eliminate the duality gap inherent in the classical Lagrange duality theory, a so-called *canonical duality theory* has been developed [2, 9]. The main idea of this new theory is to introduce a *canonical dual transformation*, which may convert some nonconvex and/or nonsmooth primal problems into smooth *canonical dual problems* without generating any duality gap, and thereby to deduce global solutions. The key step in the canonical dual transformation is the choice of the (nonlinear) geometrical operator. Different choices of this operator may lead to different (but equivalent) canonical dual functions and canonical dual problems. So far, in most of the related literature, the canonical dual transformation has been discussed and the canonical dual function formulated for quadratic minimization problems (i.e., objective functions in quadratic form). For the general form of the objective function given in (1.1), however, effective strategies for obtaining the canonical dual function (or the canonical dual problem) by commonly used methods are lacking. The novelty of this paper is to introduce the *differential flow* created by the differential equation (2.6) to construct the canonical dual function for the problem.
Lemma 2.5 guarantees the existence of the differential flow; Theorem 2.3 shows that there is no duality gap between the primal problem and its canonical dual problem given in (2.7) via the differential flow; and Theorems 3.1–3.4 use the differential flow to identify a global minimizer. In addition, the idea of introducing the set of shift parameters closely follows the work of Floudas et al. [11, 12]. In [12], they developed a global optimization method, αBB, for general twice-differentiable constrained optimization problems, which utilizes a parameter to generate valid convex underestimators for nonconvex terms of generic structure.

The main idea of constructing the differential flow and the canonical dual problem is as follows. For simplicity and without loss of generality, we assume that the box has been rescaled to the unit box, so that the constraints read componentwise for all indices.

Let denote the dual feasible space where , and is a diagonal matrix with , as its diagonal entries.

Lemma 2.1. *The dual feasible space is an open convex subset of . If , then for any .*

*Proof. *Notice that is twice continuously differentiable in . For any , the Hessian matrix is a symmetric matrix. We know that for any given , is a convex set. By the fact that the intersection of any collection of convex sets is convex, the dual feasible space is an open convex subset of . In addition, it follows from the definition of that for any . This completes the proof.

Suppose that and a nonzero vector satisfy A differential flow is introduced over a relatively small neighborhood of such that which is equivalent to where is the Jacobian of and is a matrix whose th entry is equal to the partial derivative . Here, the neighborhood of is chosen so as to preserve the invertibility of the matrix.

Let Then, a differential flow can be defined by the following differential system: where is the th entry of . Based on the extension theory [13, 14], the solution of the differential system (2.6) can be extended to a space in . The canonical dual function is defined as Thus, the canonical dual problem for our primal problem can be proposed as follows. In the following, we show that it is canonically (i.e., with zero duality gap) dual to the primal problem.
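To make the construction (2.2)–(2.7) concrete, the following worked computation is a sketch under the assumption that the feasible box is $[-1,1]^n$ and with our own choice of a quadratic objective (the symbols $A$, $c$, $\rho$, $P^d$ below are ours, not fixed by the text); every step can then be carried out in closed form.

```latex
% Assumed instance: f(x) = (1/2) x^T A x - c^T x over the box [-1,1]^n.
% A stationarity condition in the spirit of (2.2):
\nabla f\bigl(x(\rho)\bigr) + \mathrm{Diag}(\rho)\, x(\rho) = 0
  \;\Longleftrightarrow\; \bigl(A + \mathrm{Diag}(\rho)\bigr)\, x(\rho) = c .
% Differentiating in \rho_i yields a flow of the form (2.6):
\frac{\partial x(\rho)}{\partial \rho_i}
  = -\bigl(A + \mathrm{Diag}(\rho)\bigr)^{-1} x_i(\rho)\, e_i ,
% and along the flow the canonical dual function (2.7) evaluates to
P^d(\rho) = f\bigl(x(\rho)\bigr)
            + \sum_{i=1}^{n} \frac{\rho_i}{2}\bigl(x_i(\rho)^2 - 1\bigr)
          = -\tfrac12\, c^{\mathsf T}\bigl(A + \mathrm{Diag}(\rho)\bigr)^{-1} c
            - \tfrac12 \sum_{i=1}^{n} \rho_i ,
% which is concave on \{\rho : A + \mathrm{Diag}(\rho) \succ 0\},
% in agreement with Lemma 2.2.
```

For a nonquadratic objective the last equality has no closed form, which is exactly why the flow (2.6) is integrated numerically instead.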

Lemma 2.2. *Let be a given flow defined by (2.6), and be the corresponding canonical dual function defined by (2.7). For any , we have
*

*Proof. *Since is differentiable, for any ,
It follows from the process (2.2)–(2.6) that
From (2.6), we have . Then, by (2.11),
By the definition of , this concludes the proof of Lemma 2.2.

By Lemma 2.2, the canonical dual function is concave on . Any critical point must therefore be a global maximizer of , and it can be found by many well-developed nonlinear programming methods. If and , we have , and for any , by the negative definiteness of , Thus, for any , the flow will stay in and .

Theorem 2.3. *The canonical dual problem is perfectly dual to the primal problem in the sense that if is a KKT point of , then the vector is a KKT point of and
*

*Proof. *By introducing the Lagrange multiplier vector to relax the inequality constraint in , the Lagrangian function associated with becomes . Then the KKT conditions of become
Notice that . It follows from conditions (2.15) that satisfies the complementarity conditions of . By the definition of the flow , the equation holds. This proves that if is a KKT point of , then the vector defined by (2.6) is a KKT point of the primal problem .

In addition, we have
This completes the proof.

*Remark 2.4. *Theorem 2.3 shows that by using the canonical dual function (2.7), there is no duality gap between the primal problem and its canonical dual, that is, . This eliminates the duality gap inherent in the classical Lagrange duality theory and provides necessary conditions for searching for global solutions. Actually, in the proof of Theorem 2.3, we replace in with the space . Moreover, the inequality constraint in is essentially not a constraint, as indicated in [5].

By introducing a differential flow , the constrained nonconvex problem can be converted into the canonical (perfect) dual problem, which can be solved by deterministic methods. In view of the process (2.2)–(2.6), the flow is based on the KKT condition (2.2). In other words, we can solve equation (2.2) backwards from to obtain the backward flow . It is then of interest to know whether there exists a pair satisfying (2.2).

Lemma 2.5. * Suppose that . For the primal problem , there exist a point and a nonzero vector such that .*

*Proof. *Since is bounded and is twice continuously differentiable in , we can choose a large positive real number such that and (here denotes the vector of all ones). Then, it is easy to verify that at the point , and at the point .

Notice that the function is continuous and differentiable in . It follows from elementary calculus that there is a nonzero stationary point such that . Let . Thus, there exist a point and a nonzero vector satisfying (2.2). This completes the proof.

*Remark 2.6. * Actually, Lemma 2.5 gives us some guidance in searching for the desired parameter . From Lemma 2.5, we only need to choose a large positive real number such that and . Since , it then follows from uniformly in that there is a unique nonzero fixed point such that
which is equivalent to by the Brouwer fixed-point theorem. In [11, 12], some good algorithms are given to estimate the bounds of . If there is a positive real number such that , then a properly large parameter can be obtained from the inequalities
uniformly on , so that the Brouwer fixed-point theorem applies. We should choose
Finally, let which is the desired parameter. We will discuss the calculation of the parameter in detail, using the results in [15, 16], in future work.
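One concrete (though conservative) way to obtain a shift parameter of the size discussed in Remark 2.6 is a Gershgorin bound on the Hessian. The sketch below assumes the Hessian is available as a constant matrix `H`; the function name is ours.

```python
def gershgorin_shift(H):
    """Smallest rho >= 0 such that every Gershgorin disc of H + rho*I
    lies in the closed right half-line, so H + rho*I is positive
    semidefinite; any strictly larger rho makes it positive definite."""
    n = len(H)
    rho = 0.0
    for i in range(n):
        radius = sum(abs(H[i][j]) for j in range(n) if j != i)
        rho = max(rho, radius - H[i][i])
    return rho

# Indefinite example: the eigenvalues of [[1, 3], [3, 1]] are 4 and -2,
# and the bound returned is 2, shifting the spectrum to {6, 0}.
H = [[1.0, 3.0], [3.0, 1.0]]
shift = gershgorin_shift(H)
```

The bound can be loose for Hessians with large off-diagonal entries, but it is cheap and never underestimates the required shift.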

*Remark 2.7. * Moreover, for a proper parameter , it is worth investigating how to find the solution of (2.2) inside . For this issue, when the objective is a polynomial, we refer to [17], which contains results on bounding the zeros of a polynomial. For given bounds, we may determine the parameter using the results in [17] on the relation between the zeros and the coefficients; we will discuss this in future work as well. However, the KKT conditions are only necessary conditions for local minimizers in the nonconvex case. Identifying a global minimizer among all KKT points remains the key task, which we address in the next section.

#### 3. Complete Solutions to Global Optimization Problems

Theorem 3.1. * Suppose that is a KKT point of the canonical dual function and defined by (2.6). If , then is a global maximizer of on , and is a global minimizer of on and *

*Proof. *If is a KKT point of on , by (2.15), stays in , that is, for all . By Lemmas 2.1 and 2.2, it is easy to verify that and for any .

For any given parameter , (), we define the function as follows:
It is obvious that for all . Since is twice continuously differentiable in , there exists a closed convex region containing such that on ,
This implies that is the unique global minimizer of over . By (2.7), we have
Thus, for any ,
On the other hand, by the fact that the canonical dual function is concave on , must be a global maximizer of on , and we have
and for all ,

Thus, is a global minimizer of on and
This completes the proof.

*Remark 3.2. * Theorem 3.1 shows that a vector is a global minimizer of the problem if it corresponds to a critical solution of the dual. However, for certain given data, the canonical dual function may have no critical point in ; for example, the canonical dual solutions could lie on the boundary of . In this case, the primal problem may have multiple solutions.

In order to study the existence conditions of the canonical dual solutions, we let denote the boundary of .

Theorem 3.3. * Suppose that is a given twice continuously differentiable function, and . If for any given and ,
**
then the canonical dual problem has a critical point , and is a global minimizer to .*

*Proof. *We first show that for any given ,
Notice that there exist a point and a nonzero vector satisfying (2.2). For any given , the inequality always holds once becomes large enough. Then, for any , it follows from Lemmas 2.1 and 2.2 that and stays in , that is, . This means that there exists a large positive real number such that for , since is twice continuously differentiable in .

By Lemma 2.2, we have
where . For any , by the definition of , it is easy to see that , namely, decreases monotonically on . Moreover, since , we have by (3.11). Then, is monotonically decreasing on . Thus, to prove (3.10), it is only needed to show that there exists a positive real number such that
which implies that is strictly monotonically decreasing on .

Suppose that
at a point . Since and , the equation holds for some subscript , which means that . By the positive definiteness of , we have and . One can verify that on by its monotonicity. Obviously, (3.12) holds trivially if no point satisfies (3.13). Consequently, there always exists a positive real number such that is strictly monotonically decreasing on . This leads to the conclusion (3.10).

Since is concave and the condition (3.10) holds, if (3.9) holds, then the canonical dual function is coercive on the open convex set . Therefore, the canonical dual problem has one maximizer by the theory of convex analysis [4, 18]. This completes the proof.

Clearly, when on , the dual feasible space is equivalent to and by (2.1). Notice that for any given . Then is concave and coercive on , and has at least one maximizer on . In this case, it is of interest to characterize the unique solution of the primal problem by the dual variable.

Let

Theorem 3.4. * If on , the primal problem has a unique global minimizer determined by satisfying
*

*Proof. *To prove Theorem 3.4, by Theorems 2.3 and 3.1, it is only needed to prove that is a KKT point of in . By (3.14), the relations for all also hold. Since satisfies the equations for all , we can verify that stays in and the complementarity conditions hold. Thus, is a KKT point of in by (2.9) and (2.15), and is the unique global minimizer of . This completes the proof.

Before turning to applications in optimal control problems, we present two examples of finding global minimizers by differential flows.

*Example 3.5. * As a particular case of the primal problem, let us consider the following one-dimensional nonconvex minimization problem with a box constraint:
We have . By choosing , we solve the following equation in :
to get a solution . Next we solve the following boundary value problem of the ordinary differential equation:
To find a parameter such that
we get
which satisfies
Let be denoted by . To find the value of , we compute the solution of the following algebraic equation:
and get . It follows from Theorem 3.1 that is a global minimizer of Example 3.5.
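The pipeline of Example 3.5 can also be reproduced numerically. The sketch below uses an assumed one-dimensional instance, f(x) = -x² - x on [-1, 1] (our choice of data, not necessarily the example's): start at the point solving the stationarity equation f'(x) + ρx = 0 at ρ₀ = 4, integrate the flow dx/dρ = -x/(f''(x) + ρ) backward to ρ* = 3, where the boundary condition x² = 1 is met, and confirm global optimality by a grid search.

```python
# Assumed 1-D instance (our data): minimize f(x) = -x^2 - x over [-1, 1].
def f(x):   return -x * x - x
def fpp(x): return -2.0            # second derivative of f

def rhs(rho, x):
    # Flow analogous to (2.6): dx/drho = -x / (f''(x) + rho).
    return -x / (fpp(x) + rho)

def rk4_step(rho, x, h):
    k1 = rhs(rho, x)
    k2 = rhs(rho + h/2, x + h/2 * k1)
    k3 = rhs(rho + h/2, x + h/2 * k2)
    k4 = rhs(rho + h,   x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# f'(x) + rho*x = 0 at rho0 = 4 gives the start point x0 = 1/2.
rho, x, h = 4.0, 0.5, -0.001
for _ in range(1000):              # integrate backward from rho = 4 to rho = 3
    x = rk4_step(rho, x, h)
    rho += h

# At rho = 3 the flow reaches x = 1 with x^2 = 1; a grid search over the
# box confirms that x = 1 is the global minimizer.
best = min(f(-1 + k / 1000) for k in range(2001))
```

For this instance the flow has the closed form x(ρ) = 1/(ρ - 2), so the integration can be checked exactly; for a general nonconvex f only the numerical flow is available.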

*Remark 3.6. * In this example, we see that a differential flow is useful in solving a nonconvex optimization problem. In global optimization, one usually computes the global minimizer numerically; even with the canonical duality method, one has to solve a canonical dual problem numerically. The differential flow, by contrast, points to a new way of finding a global minimizer. In particular, one may expect an exact solution of the problem provided that the corresponding differential equation has an analytic solution.

*Example 3.7. *Let a symmetric matrix and a vector be given, and let . We consider the following box-constrained nonconvex global optimization problem:
Since is an indefinite matrix, we choose a large such that and . We see that the differential equation is
where . It leads to a differential flow
For simplicity and without loss of generality, we assume that and . If for , , we have
Then the dual problem can be formulated as

If we choose and , this dual problem has only one KKT point . By Theorem 3.1, is a global minimizer of Example 3.7 and .
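For a quadratic objective with diagonal matrix, the dual problem decouples coordinatewise and its maximizer is available in closed form. The sketch below uses our own data, A = diag(-1, -2) and c = (0.3, 0.5) (not necessarily the choices of Example 3.7), and verifies the recovered point against a brute-force grid.

```python
# Assumed instance (ours): f(x) = 1/2 x^T A x - c^T x with
# A = diag(-1, -2), c = (0.3, 0.5), minimized over the box [-1, 1]^2.
a = [-1.0, -2.0]
c = [0.3, 0.5]

def f(x):
    return sum(0.5 * a[i] * x[i] * x[i] - c[i] * x[i] for i in range(2))

# For diagonal A the canonical dual decouples: maximizing
#   -c_i^2 / (2 (a_i + rho_i)) - rho_i / 2   subject to  a_i + rho_i > 0
# gives a_i + rho_i = |c_i|, hence rho_i = |c_i| - a_i and
#   x_i = c_i / (a_i + rho_i) = sign(c_i).
rho = [abs(c[i]) - a[i] for i in range(2)]
x_star = [c[i] / (a[i] + rho[i]) for i in range(2)]

# Brute-force verification on a fine grid over the box.
grid = [-1 + k / 200 for k in range(401)]
brute = min(f([u, v]) for u in grid for v in grid)
```

Here the dual critical point gives x* = (1, 1), and the grid search returns the same minimum value, illustrating Theorem 3.1 on a toy instance.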

#### 4. Applications to Constrained Optimal Control Problems

In this section, we consider the following constrained linear-quadratic optimal control problem:
where are positive semidefinite and positive definite symmetric matrices, respectively, is a state vector, and is integrable or piecewise continuous on within . For simplicity, , and the control constraint set is a unit *box*. Problems of the above type arise naturally in systems science and engineering with wide applications [19, 20].

It is well known that the central result in the optimal control theory is the *Pontryagin maximum principle* providing necessary conditions for optimality in very general optimal control problems.

##### 4.1. Pontryagin Maximum Principle

Define the Hamilton-Jacobi-Bellman function as follows. If the control is an optimal solution of the problem (4.1), with and denoting the state and costate corresponding to , respectively, then is an extremal control; that is, we have and ,

Unfortunately, the above conditions are not, in general, sufficient for optimality. In such a case, we need to go through the process of comparing all the candidates for optimality that the necessary conditions produce and picking out an optimal solution to the problem. Nevertheless, if a solution satisfies sufficient conditions of the type considered in this section, then those conditions ensure its optimality; this is established in Lemma 4.1.

Lemma 4.1. * Let be an admissible control, and be the corresponding state and costate. If , , and satisfy the Pontryagin maximum principle ((4.3)-(4.4)), then is an optimal control to the problem (4.1).*

*Proof. *For any given , let
For any , by the definition of , , and is equivalent to the following global optimization problem:
Moreover, we can derive an analytic form of the global minimizer of (4.6) via the costate . It is easy to see that the minimizer of (4.6) does not depend on , that is, which implies that
Since is a closed convex set, by the classical linear systems theory, the state set of (4.1) is a convex subset of . By the fact that the minimizer does not depend on and the convexity of the integrand in the cost functional, the function is convex with respect to over . In other words, for any , and ,
Thus, for any admissible pair , and , by (4.5), we have
which leads to
Notice that and . We can obtain
This means that attains its minimum at . The proof is completed.

Lemma 4.1 reformulates the constrained optimal control problem (4.1) as a global optimization problem (4.6). Based on Theorem 3.4, an analytic solution of (4.1) can be expressed via the costate.
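When the control-weighting matrix is diagonal with positive entries, the pointwise minimization (4.6) over the unit box is separable and convex, so its solution is the unconstrained linear feedback clipped to the box. A minimal sketch (the names `r_diag` and `bt_psi` are ours, standing for the diagonal of R and for B^T ψ at a fixed time):

```python
def clip(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def u_star(r_diag, bt_psi):
    """Minimizer of (1/2) sum_i r_i u_i^2 + sum_i g_i u_i over the unit
    box, where g = B^T psi and R = diag(r) with r_i > 0: the objective
    is separable and convex, so each coordinate is the unconstrained
    minimizer -g_i / r_i clipped to [-1, 1]."""
    return [clip(-g / ri) for ri, g in zip(r_diag, bt_psi)]

# Example data (ours): R = diag(1, 2) and B^T psi = (3, -0.5).
r, g = [1.0, 2.0], [3.0, -0.5]
u = u_star(r, g)   # -> [-1.0, 0.25]
```

The first coordinate saturates at the box boundary while the second stays interior, which is exactly the behavior the dual characterization in Theorem 4.2 captures.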

Theorem 4.2. * Suppose that
**
We have the following expression
**
where with respect to the costate is given by satisfying
*

*Proof. *The proof of Theorem 4.2 is parallel to that of Theorem 3.4.

Substituting into (4.3), we have If is a solution of the above equations (4.15), let . By Lemma 4.1, satisfies the Pontryagin maximum principle, and we obtain an analytic form of the optimal control to (4.1) via the canonical dual variable.

Next, we give an example to illustrate our results.

*Example 4.3. * We consider
and in (4.1), where and satisfy the assumptions in (4.1).

Following the idea of Lemma 4.1 and Theorem 4.2, we need to solve the following boundary value problem for differential equations to derive the optimal solution
By solving equations (4.19) in MATLAB, we can obtain the optimal control and the dual variable as follows (see Figures 1 and 2).
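A boundary value problem of the type (4.19) can also be attacked without a dedicated BVP solver by a forward-backward sweep: integrate the state equation forward, the costate equation backward, update the control by the clipped feedback, and iterate to a fixed point. The sketch below uses an assumed scalar system x' = u with cost (1/2)∫(x² + u²) on [0, 1] and x(0) = 2 (all data are ours, not Example 4.3's).

```python
# Assumed data (ours): x' = u, cost (1/2) * integral of (q x^2 + r u^2)
# on [0, T], x(0) = x0, control box |u| <= 1.
N, T = 200, 1.0
dt = T / N
x0, q, r = 2.0, 1.0, 1.0

def clip(v):
    return max(-1.0, min(1.0, v))

u = [0.0] * N                      # control on each subinterval
for sweep in range(200):
    # Forward Euler for the state equation x' = u.
    x = [x0]
    for k in range(N):
        x.append(x[k] + dt * u[k])
    # Backward Euler for the costate equation psi' = -q*x, psi(T) = 0.
    psi = [0.0] * (N + 1)
    for k in range(N - 1, -1, -1):
        psi[k] = psi[k + 1] + dt * q * x[k + 1]
    # Pointwise minimization of the Hamiltonian over the box [-1, 1].
    u_new = [clip(-psi[k] / r) for k in range(N)]
    change = max(abs(p - w) for p, w in zip(u, u_new))
    u = u_new
    if change < 1e-10:
        break
```

On this short horizon the sweep is a contraction and converges in a few dozen iterations; the computed control saturates at -1 near t = 0 and relaxes toward 0 as the costate vanishes at the final time.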

#### Acknowledgments

The authors would like to thank the referees for their helpful comments on the early version of this paper. This work was supported by the National Natural Science Foundation of China (no. 10971053).