Abstract

We present a global error bound for the projected gradient of nonconvex constrained optimization problems and a local error bound for the distance from a feasible solution to the optimal solution set of convex constrained optimization problems, by using the merit function involved in the sequential quadratic programming (SQP) method. For the solution sets (the stationary points set and the KKT points set) of nonconvex constrained optimization problems, we introduce the notions of generalized nondegeneracy and generalized weak sharp minima. Based on these notions, we give necessary and sufficient conditions for a feasible solution sequence of the nonconvex constrained optimization problem to terminate finitely at each of the two solution sets. Accordingly, the results in this paper improve and generalize existing results known in the literature. Furthermore, since the merit function is easy to compute, we use it, through the global error bound for the projected gradient, to characterize these necessary and sufficient conditions.

1. Introduction

This paper is concerned with the following constrained optimization problem:

$$(P)\qquad \min\ f(x)\quad \text{s.t.}\quad x \in \Omega := \{x \in \mathbb{R}^n : g_i(x) \le 0,\ i = 1, \dots, m\},$$

where $f\colon \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function and $g_i\colon \mathbb{R}^n \to \mathbb{R}$, for $i = 1, \dots, m$, are continuously differentiable convex functions.

It is well known that SQP is an important method for solving the problem (P). Its essential idea is to approximate the solutions of the problem (P) by the optimal solutions of a sequence of quadratic programs. The solutions here may be the stationary points, the KKT points, or the optimal solutions of the problem (P).

Suppose that $x$ is the current iterate for the problem (P). During the iterative process of the SQP method, the next iterate is generally generated by solving the following subproblem:

$$\mathrm{QP}(x, H)\qquad \min_{d}\ \langle \nabla f(x), d\rangle + \frac{1}{2}\langle d, Hd\rangle \quad \text{s.t.}\quad g_i(x) + \langle \nabla g_i(x), d\rangle \le 0,\ i = 1, \dots, m,$$

where $\nabla f(x)$ and $\nabla g_i(x)$ are the gradients of $f$ and $g_i$, respectively, and $H$ is a symmetric positive definite matrix. The matrix $H$ is modified and selected anew along with the iterative process.

For the matrix $H$, we assume throughout this paper that there exist positive numbers $\mu$ and $\nu$ such that

$$\mu \|d\|^2 \le \langle d, Hd\rangle \le \nu \|d\|^2 \quad \text{for all } d \in \mathbb{R}^n,$$

where $\|\cdot\|$ stands for the Euclidean norm.
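For a symmetric positive definite $H$, the tightest such constants are the extreme eigenvalues of $H$. The following small numerical check (our own illustration; the matrix is hypothetical) confirms the bound with numpy:

    import numpy as np

    # Hypothetical symmetric positive definite matrix H (illustration only).
    H = np.array([[4.0, 1.0],
                  [1.0, 3.0]])

    # For symmetric H, the Rayleigh quotient <d, Hd> / ||d||^2 ranges over
    # [lambda_min, lambda_max], so the tightest constants are
    # mu = lambda_min(H) and nu = lambda_max(H).
    eigenvalues = np.linalg.eigvalsh(H)
    mu, nu = eigenvalues[0], eigenvalues[-1]

    rng = np.random.default_rng(0)
    for _ in range(1000):
        d = rng.standard_normal(2)
        quad = d @ H @ d
        assert mu * (d @ d) - 1e-9 <= quad <= nu * (d @ d) + 1e-9
    print("mu =", mu, "nu =", nu)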

Denote by $\Omega(x) := \{d \in \mathbb{R}^n : g_i(x) + \langle \nabla g_i(x), d\rangle \le 0,\ i = 1, \dots, m\}$ the feasible set of the subproblem $\mathrm{QP}(x, H)$. Due to the convexity of the functions $g_i$, it is easy to see that $\Omega \subseteq x + \Omega(x)$ for all $x \in \mathbb{R}^n$.

Consider the following function:

$$\theta(x) := -\min_{d \in \Omega(x)} \left\{ \langle \nabla f(x), d\rangle + \frac{1}{2}\langle d, Hd\rangle \right\},$$

which is referred to as a merit function.
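Since $\mathrm{QP}(x, H)$ is a convex quadratic program over the polyhedral set $\Omega(x)$, both the minimizer $d(x)$ and the value $\theta(x)$ can be computed with any QP solver. The following sketch is a minimal illustration under hypothetical problem data $f$, $g_1$, and $H$ chosen by us (it is not the paper's algorithm), using scipy's SLSQP solver:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical instance (illustration only):
    # f(x) = 0.5 * ||x - a||^2,  g_1(x) = x_1 + x_2 - 1.
    a = np.array([2.0, 0.0])
    grad_f = lambda x: x - a
    g = lambda x: np.array([x[0] + x[1] - 1.0])
    grad_g = lambda x: np.array([[1.0, 1.0]])   # rows are gradients of g_i
    H = np.eye(2)                               # symmetric positive definite

    def subproblem(x):
        """Solve QP(x, H): min <grad f(x), d> + 0.5 <d, H d>
        over Omega(x) = {d : g_i(x) + <grad g_i(x), d> <= 0}.
        Returns d(x) and theta(x) = minus the optimal value."""
        c, G, gx = grad_f(x), grad_g(x), g(x)
        obj = lambda d: c @ d + 0.5 * d @ H @ d
        cons = {"type": "ineq", "fun": lambda d: -(gx + G @ d)}
        res = minimize(obj, np.zeros_like(x), jac=lambda d: c + H @ d,
                       constraints=[cons], method="SLSQP")
        return res.x, -res.fun

    x = np.array([0.0, 0.0])         # feasible: g(x) = -1 <= 0
    d_x, theta_x = subproblem(x)
    print(d_x, theta_x)              # theta(x) >= 0 since d = 0 is feasible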

It is well known that the SQP method is widely and effectively used for solving optimization problems (see [1–7]). However, the aim of this paper is not to propose further algorithms or computational techniques for SQP methods. Instead, we study global error bounds for the projected gradient of the nonconvex problem (P) and local error bounds for the distance from a feasible solution to the optimal solution set of the convex problem (P), with both error bounds expressed by means of the merit function $\theta$. This is one of the main contributions of this paper.

The theory of error bounds has attracted a lot of attention, and many good results have been obtained. In particular, [8, 9] established several types of global error bounds for monotone affine variational inequality problems; [10–12] developed various global error bounds for strongly monotone variational inequality problems; [13, 14] obtained global error bounds for generalized linear complementarity problems and monotone variational inequality problems. However, to the best of our knowledge, there is little work concerning error bounds for SQP methods. This motivates our research.

It should be noted that the merit function $\theta$ considered here is different from the regular gap function (another kind of merit function) discussed in [10, 11, 15, 16], since $\theta$ is generated over the polyhedral set $\Omega(x)$, which contains $\Omega - x$, rather than over $\Omega$ itself. The main advantage of this modification is that $x$ being a KKT point of the problem (P) is equivalent to $\theta(x) = 0$. In addition, the computation of $\theta$ is easier than that of the regular gap function.

Another main contribution of this paper concerns the finite termination of a feasible solution sequence; that is, the situation where a feasible solution sequence reaches, in finitely many steps, one of the solution sets (the stationary points set and the KKT points set) of problem (P). This question has received considerable attention (see [17–25]). Among these works, [20–25] studied finite termination under the assumption that the solution set is a set of weak sharp minima or a nondegenerate set, while [17–19] discussed, under stronger conditions, the finite termination of some new efficient algorithms. It is worth mentioning that Burke and Ferris [20], assuming that the solution set of a convex optimization problem is a set of weak sharp minima together with other hypotheses, gave a necessary and sufficient condition for a feasible solution sequence to terminate finitely at the solution set (see [20, Theorem 4.7]). Afterwards, under the same conditions, [21] verified the corresponding conclusion for pseudomonotone$^+$ variational inequality problems. Recently, [25, Theorem 2] simplified the conditions of [20, Theorem 4.7] and confirmed that [20, Theorem 4.7] remains valid when the solution set is merely assumed to be a set of weak sharp minima. For nonconvex optimization problems (P), however, establishing a necessary and sufficient condition for a feasible solution sequence to terminate finitely at the solution set, when the solution set is a set of weak sharp minima or a nondegenerate set, is undoubtedly of great significance, but to our knowledge no work has addressed this issue so far.

In this paper, inspired by [25, Theorem 2], we solve this problem. We first extend the concepts of nondegeneracy and weak sharp minima and, for the solution sets of the problem (P), establish the definitions of a generalized nondegenerate set and a set of generalized weak sharp minima. Based on these two generalized concepts, we prove the following main results:

(1) the necessary and sufficient condition for a feasible solution sequence to terminate finitely at the solution set of the problem (P) is that the corresponding sequence of projected gradients converges to zero.

When the feasible solution set $\Omega$ of (P) is a general closed convex set, the computation of the gradient projection is very difficult. By contrast, the merit function $\theta$ is much easier to compute, and $\theta(x) = 0$ is equivalent to $x$ being a KKT point. Based on this feature of $\theta$, we use the global error bound for the projected gradient in terms of the merit function $\theta$ to characterize the necessary and sufficient condition for the sequence to terminate finitely at a generalized nondegenerate KKT points set; that is,

(2) for the problem (P), the necessary and sufficient condition for a feasible solution sequence to terminate finitely at a generalized nondegenerate KKT points set is that the corresponding sequence of merit function values converges to zero.

For generalized weak sharp minima, we prove the following:

(3) provided the stationary points set of the nonconvex optimization problem (P) is convex, the necessary and sufficient condition for a feasible solution sequence to terminate finitely at a stationary points set of generalized weak sharp minima is that the corresponding sequence of projected gradients converges to zero. As a straightforward corollary of this result, we obtain [25, Theorem 2]; therefore, we extend [20, Theorem 4.7] to nonconvex optimization problems.

The rest of this paper is organized as follows. In Section 2, we introduce some concepts and notation used in the subsequent discussion. In Section 3, we develop some basic properties of $\theta$ and use them to obtain a global error bound for the projected gradient and a local error bound for the distance from a feasible solution to the optimal solution set of problem (P). Finally, in Section 4, we give necessary and sufficient conditions for a feasible solution sequence to terminate finitely at a generalized nondegenerate set and at a set of generalized weak sharp minima, respectively, and draw a number of meaningful conclusions.

2. Definitions and Notation

Let $\|\cdot\|$ and $\langle \cdot, \cdot\rangle$ stand for the standard Euclidean norm and inner product in $\mathbb{R}^n$, respectively. Denote by $S$, $K$, and $S^*$ the stationary points set, the KKT points set, and the (global) optimal solutions set of problem (P), respectively. In view of the assumption on the matrix $H$, the optimal solution of the subproblem $\mathrm{QP}(x, H)$ is unique. For simplicity, we denote this optimal solution by $d(x)$; any confusion about the underlying matrix can be eliminated from the context.

Given a nonempty subset $C$ of $\mathbb{R}^n$, its closure is denoted by $\operatorname{cl} C$ and its polar cone is defined as

$$C^{\circ} := \{y \in \mathbb{R}^n : \langle y, c\rangle \le 0 \ \text{for all } c \in C\}.$$

The tangent cone of $\Omega$ at $x \in \Omega$ is given by

$$T_{\Omega}(x) := \Big\{d \in \mathbb{R}^n : d = \lim_{k \to \infty} t_k (x_k - x),\ x_k \in \Omega,\ x_k \to x,\ t_k \ge 0\Big\},$$

and the normal cone of $\Omega$ at $x$ is defined as $N_{\Omega}(x) := T_{\Omega}(x)^{\circ}$. In particular, if $\Omega$ is convex, then the tangent and normal cones of $\Omega$ at $x$ take the following forms, respectively:

$$T_{\Omega}(x) = \operatorname{cl}\{\lambda (y - x) : y \in \Omega,\ \lambda \ge 0\}, \qquad N_{\Omega}(x) = \{v \in \mathbb{R}^n : \langle v, y - x\rangle \le 0 \ \text{for all } y \in \Omega\}.$$

The projection of a point $x$ onto a closed set $C$ is defined by $P_{C}(x) := \arg\min\{\|y - x\| : y \in C\}$, and the distance from $x$ to $C$ is given by $d(x, C) := \inf\{\|y - x\| : y \in C\}$. Given a vector $x$ and a nonnegative scalar $\varepsilon$, we use the notation $x + \varepsilon B$ to mean the set $\{y : \|y - x\| \le \varepsilon\}$, where $B$ stands for the closed unit ball in $\mathbb{R}^n$.

A mapping $\nabla_{\Omega} f$ is said to be the projected gradient of $f$ with respect to the set $\Omega$ if

$$\nabla_{\Omega} f(x) := P_{T_{\Omega}(x)}(-\nabla f(x)).$$

Clearly, $x \in \Omega$ is a stationary point of (P) if and only if $\nabla_{\Omega} f(x) = 0$ or, equivalently, $-\nabla f(x) \in N_{\Omega}(x)$. Since $\Omega$ is convex, it is easy to see that $S^* \subseteq S$ and $K \subseteq S$. Clearly, we have $S = S^*$ for a convex optimization problem.
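When $\Omega = \{x : Ax \le b\}$ is polyhedral, $T_{\Omega}(x) = \{v : A_I v \le 0\}$, where $A_I$ collects the rows of $A$ active at $x$, and the projected gradient can be computed by one more small QP. A sketch (our illustration, with hypothetical data; not code from the paper):

    import numpy as np
    from scipy.optimize import minimize

    def projected_gradient(grad_f_x, A, b, x, tol=1e-9):
        """Compute P_{T_Omega(x)}(-grad f(x)) for Omega = {x : A x <= b}.
        The tangent cone at x is {v : A_I v <= 0} with I the active rows."""
        w = -grad_f_x
        active = A @ x >= b - tol
        if not active.any():
            return w                 # interior point: tangent cone is R^n
        A_I = A[active]
        obj = lambda v: 0.5 * (v - w) @ (v - w)   # nearest point of the cone
        cons = {"type": "ineq", "fun": lambda v: -(A_I @ v)}
        res = minimize(obj, np.zeros_like(w), jac=lambda v: v - w,
                       constraints=[cons], method="SLSQP")
        return res.x

    # Omega = {x >= 0}, written as -x <= 0; f(x) = 0.5 * ||x - a||^2.
    A, b = -np.eye(2), np.zeros(2)
    a = np.array([-1.0, 2.0])
    x = np.zeros(2)                  # both bound constraints active at x
    print(projected_gradient(x - a, A, b, x))   # approx [0., 2.]

Here the first component of the projected gradient vanishes (the constraint blocks descent in that direction), while the second does not, so this $x$ is not stationary.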

For the solution set of the optimization problem (P), Burke and Ferris [20] extended the concept of a sharp (i.e., strongly unique) minimum at a single point to that of weak sharp minima, which can deal with the case where the solution set is not a singleton. More precisely, a set $\bar S \subseteq S^*$ is said to be a set of weak sharp minima if there exists a positive constant $\alpha$, which depends only on $f$, $\Omega$, and $\bar S$, such that

$$f(x) \ge f(\bar x) + \alpha\, d(x, \bar S) \quad \text{for all } x \in \Omega,\ \bar x \in \bar S. \tag{11}$$

For instance, for $f(x) = x_1$ on $\Omega = [0,1]^2$, the solution set $\bar S = \{0\} \times [0,1]$ is a set of weak sharp minima with $\alpha = 1$.

We say that a sequence $\{x_k\}$ terminates finitely at $\bar S$ if there exists $k_0$ such that $x_k \in \bar S$ for all $k \ge k_0$. Given $c \in \mathbb{R}$, the level set of $f$ is defined as $L(c) := \{x \in \Omega : f(x) \le c\}$.

3. Error Bound

3.1. Properties of $\theta$

This subsection mainly deals with the basic properties of the merit function $\theta$. From the definition of the subproblem $\mathrm{QP}(x, H)$, we know that $\theta(x) \ge 0$ for all $x \in \Omega$, since $d = 0$ is feasible for $\mathrm{QP}(x, H)$ whenever $x \in \Omega$. Since $\Omega(x)$ is polyhedral, we have the following.

Lemma 1. Given $x \in \mathbb{R}^n$, a point $d(x)$ is the unique solution of the subproblem $\mathrm{QP}(x, H)$ if and only if there exist multipliers $\lambda_i(x) \ge 0$ for $i = 1, \dots, m$ such that

$$\nabla f(x) + H d(x) + \sum_{i=1}^{m} \lambda_i(x) \nabla g_i(x) = 0, \tag{13}$$

$$\lambda_i(x)\big(g_i(x) + \langle \nabla g_i(x), d(x)\rangle\big) = 0, \quad i = 1, \dots, m, \tag{14}$$

where, for simplicity, one uses $\lambda_i(x)$ to denote the multipliers associated with the unique optimal solution $d(x)$ of the problem $\mathrm{QP}(x, H)$.

Lemma 2. For any $x \in \mathbb{R}^n$, one has

$$\theta(x) = \frac{1}{2}\langle d(x), H d(x)\rangle - \sum_{i=1}^{m} \lambda_i(x)\, g_i(x).$$

Proof. Left-multiplying the two sides of (13) by $d(x)$ and using (14), we have

$$\langle \nabla f(x), d(x)\rangle + \langle d(x), H d(x)\rangle = \sum_{i=1}^{m} \lambda_i(x)\, g_i(x).$$

By the definition of $\theta$, we obtain

$$\theta(x) = -\langle \nabla f(x), d(x)\rangle - \frac{1}{2}\langle d(x), H d(x)\rangle = \frac{1}{2}\langle d(x), H d(x)\rangle - \sum_{i=1}^{m} \lambda_i(x)\, g_i(x).$$
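The identity in Lemma 2 can be checked numerically: solve the subproblem, recover the multipliers $\lambda_i(x)$ from the stationarity condition (13) by nonnegative least squares, and compare the two expressions for $\theta(x)$. (A sketch under the same hypothetical instance as before; all names are ours.)

    import numpy as np
    from scipy.optimize import minimize, nnls

    # Hypothetical instance: f(x) = 0.5*||x - a||^2, g_1(x) = x_1 + x_2 - 1.
    a = np.array([2.0, 0.0]); H = np.eye(2)
    x = np.array([0.0, 0.0])
    c = x - a                                      # grad f(x)
    G = np.array([[1.0, 1.0]])                     # rows: grad g_i(x)
    gx = np.array([x[0] + x[1] - 1.0])             # g(x)

    obj = lambda d: c @ d + 0.5 * d @ H @ d
    cons = {"type": "ineq", "fun": lambda d: -(gx + G @ d)}
    res = minimize(obj, np.zeros(2), jac=lambda d: c + H @ d,
                   constraints=[cons], method="SLSQP")
    d_x, theta_x = res.x, -res.fun

    # Multipliers from (13): grad f(x) + H d(x) + G^T lambda = 0, lambda >= 0,
    # recovered here by nonnegative least squares.
    lam, residual = nnls(G.T, -(c + H @ d_x))
    assert residual < 1e-6

    # Lemma 2: theta(x) = 0.5 <d(x), H d(x)> - sum_i lambda_i g_i(x).
    rhs = 0.5 * d_x @ H @ d_x - lam @ gx
    print(theta_x, rhs)    # the two values agree up to solver tolerance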

Lemma 3. For any $x \in \Omega$, one has

$$\theta(x) \ge \frac{1}{2}\langle d(x), H d(x)\rangle \ge \frac{\mu}{2}\|d(x)\|^2.$$

Indeed, this follows from Lemma 2, since $\lambda_i(x) \ge 0$ and $g_i(x) \le 0$ for $x \in \Omega$.

With the preparation of these lemmas, we obtain the following result.

Theorem 4. The following statements are equivalent: (1) $x$ is a KKT point of the problem (P); (2) $x \in \Omega$ and $\theta(x) = 0$; (3) $x \in \Omega$ and $d(x) = 0$.

From statements (1) and (2) of Theorem 4 and the nonnegativity of $\theta$ on $\Omega$, it is immediate to verify the following result.

Corollary 5. Suppose that the KKT points set of (P) is nonempty. Then, $\bar x$ is a KKT point of problem (P) if and only if $\bar x$ is an optimal solution of the problem

$$\min_{x \in \Omega}\ \theta(x).$$

In this case, $\theta(\bar x) = 0$ and $d(\bar x) = 0$.

Lemma 6. For any $x \in \mathbb{R}^n$ and $d \in \Omega(x)$, one has

$$\langle \nabla f(x) + H d(x),\ d - d(x)\rangle \ge 0.$$

Proof. Clearly, the optimal solution of the problem $\mathrm{QP}(x, H)$ is the same as that of the following programming problem:

$$\min_{d \in \Omega(x)}\ \langle \nabla f(x), d\rangle + \frac{1}{2}\langle d, Hd\rangle.$$

Since this is a convex program, its optimal solutions must be stationary points as well; that is,

$$\langle \nabla f(x) + H d(x),\ d - d(x)\rangle \ge 0 \quad \text{for all } d \in \Omega(x).$$

3.2. Error Bound

The following result provides an estimate of the projected gradient with respect to the polyhedral set $\Omega(x)$ in terms of $\theta$.

Theorem 7. For any $x \in \Omega$, one has

$$c_1 \sqrt{\theta(x)} \le \|\hat\nabla f(x)\| \le c_2 \sqrt{\theta(x)},$$

where $\hat\nabla f(x) := P_{T_{\Omega(x)}(0)}(-\nabla f(x))$, $c_1 = \sqrt{\mu/2}$, and $c_2 = \nu\sqrt{2/\mu}$.

Proof. Since $\Omega(x)$ is a polyhedral set, it is easy to see that the tangent cone of $\Omega(x)$ at $0$ takes the following form:

$$T_{\Omega(x)}(0) = \{v \in \mathbb{R}^n : \langle \nabla g_i(x), v\rangle \le 0,\ i \in I(x)\}, \qquad I(x) := \{i : g_i(x) = 0\},$$

where we have used the fact that $x \in \Omega$ implies $0 \in \Omega(x)$. Take $v \in T_{\Omega(x)}(0)$ with $\|v\| \le 1$. Left-multiplying both sides of (13) by $v$ and using Lemma 3, we obtain

$$\langle -\nabla f(x), v\rangle = \langle H d(x), v\rangle + \sum_{i=1}^{m} \lambda_i(x)\langle \nabla g_i(x), v\rangle \le \nu \|d(x)\| \le \nu\sqrt{\frac{2\theta(x)}{\mu}}.$$

In other words, we obtain $\sup\{\langle -\nabla f(x), v\rangle : v \in T_{\Omega(x)}(0),\ \|v\| \le 1\} \le c_2\sqrt{\theta(x)}$, where $c_2 = \nu\sqrt{2/\mu}$. This, together with the properties of the projected gradient given by Calamai and Moré [26], implies that $\|\hat\nabla f(x)\| \le c_2\sqrt{\theta(x)}$. Thus, the right inequality is proved.
On the other hand, Lemma 3 implies that $\theta(x) \ge \frac{\mu}{2}\|d(x)\|^2$, from which we obtain $\|d(x)\| \le \sqrt{2\theta(x)/\mu}$. Let $v = d(x)/\|d(x)\|$. Note that $d(x) \in T_{\Omega(x)}(0)$, since $\langle \nabla g_i(x), d(x)\rangle \le -g_i(x) = 0$ for $i \in I(x)$. From these two relations, we obtain

$$\|\hat\nabla f(x)\| \ge \langle -\nabla f(x), v\rangle = \frac{\theta(x) + \frac{1}{2}\langle d(x), H d(x)\rangle}{\|d(x)\|} \ge \frac{\theta(x)}{\|d(x)\|} \ge \sqrt{\frac{\mu}{2}}\,\sqrt{\theta(x)}.$$

So the left inequality is valid.

The following result can be obtained by Theorems 4 and 7.

Corollary 8. The following statements are equivalent: (1) $x$ is a KKT point of the problem (P); (2) $x \in \Omega$ and $\hat\nabla f(x) = 0$.

A global error bound for the projected gradient $\nabla_{\Omega} f$ is given below.

Theorem 9. For any $x \in \Omega$, one has

$$\|\nabla_{\Omega} f(x)\| \le c_2 \sqrt{\theta(x)}.$$

If $\Omega$ is polyhedral, then

$$c_1 \sqrt{\theta(x)} \le \|\nabla_{\Omega} f(x)\| \le c_2 \sqrt{\theta(x)},$$

where $c_1 = \sqrt{\mu/2}$ and $c_2 = \nu\sqrt{2/\mu}$.

Proof. Since $\Omega \subseteq x + \Omega(x)$ and $0 \in \Omega(x)$ for $x \in \Omega$, it follows from the definition of the tangent cone that $T_{\Omega}(x) \subseteq T_{\Omega(x)}(0)$. Taking into account the properties of the projected gradient, we obtain

$$\|\nabla_{\Omega} f(x)\| = \|P_{T_{\Omega}(x)}(-\nabla f(x))\| \le \|P_{T_{\Omega(x)}(0)}(-\nabla f(x))\| = \|\hat\nabla f(x)\|,$$

and the first inequality follows from Theorem 7. If $\Omega$ is polyhedral, then $\Omega(x) = \Omega - x$; that is, $T_{\Omega(x)}(0) = T_{\Omega}(x)$ and $\hat\nabla f(x) = \nabla_{\Omega} f(x)$. Thus, the result is established by invoking Theorem 7.

Now, we consider the case where the problem (P) is convex; that is, all the functions involved in (P) are convex. Obviously, in this case, $\Omega$ is a closed convex set. Under condition (11), we use $\theta$ to give a local error bound for the distance from a feasible solution to the optimal solution set.

Theorem 10. Suppose that (P) is a convex program. If $\bar S \subseteq S^*$ is a set of weak sharp minima, that is, (11) holds, then there exist positive constants $c$ and $\delta$ such that

$$d(x, \bar S) \le c\,\theta(x) \quad \text{for all } x \in \Omega \text{ with } \theta(x) \le \delta.$$

Proof. Given any $x \in \Omega$, let $\bar x = P_{\bar S}(x)$. Taking $d = \bar x - x \in \Omega(x)$ in Lemma 6, we have

$$\langle \nabla f(x) + H d(x),\ \bar x - x - d(x)\rangle \ge 0.$$

By (11), this inequality, the convexity of $f$, and the assumption on $H$, we have

$$\alpha\, d(x, \bar S) \le f(x) - f(\bar x) \le \langle \nabla f(x), x - \bar x\rangle \le \theta(x) + \nu \|d(x)\|\, \|x - \bar x\| = \theta(x) + \nu \|d(x)\|\, d(x, \bar S).$$

According to the last inequality above and Lemma 3,

$$\alpha\, d(x, \bar S) \le \theta(x) + \nu\sqrt{\frac{2\theta(x)}{\mu}}\, d(x, \bar S).$$

Let $c = 2/\alpha$ and $\delta = \mu\alpha^2/(8\nu^2)$. When $\theta(x) \le \delta$, we have $\nu\sqrt{2\theta(x)/\mu} \le \alpha/2$, and it follows from the inequality above that $d(x, \bar S) \le c\,\theta(x)$.

Corollary 11. Suppose that (P) is a convex program. If (11) holds and $\{x_k\} \subseteq \Omega$ satisfies $\lim_{k \to \infty}\theta(x_k) = 0$, then

$$\lim_{k \to \infty} d(x_k, \bar S) = 0.$$

Proof. For the positive constant $\delta$ given in Theorem 10, it follows from $\lim_{k \to \infty}\theta(x_k) = 0$ that there exists $k_0$ such that $\theta(x_k) \le \delta$ for all $k \ge k_0$; that is, $d(x_k, \bar S) \le c\,\theta(x_k)$ for all $k \ge k_0$. The result then follows readily from Theorem 10.
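To illustrate the local error bound of Theorem 10 numerically (with the constants as reconstructed above, on a hypothetical convex instance of our own choosing): take $f(x) = x_1$ on $\Omega = [0,1]^2$, whose optimal solution set $\bar S = \{0\}\times[0,1]$ is a set of weak sharp minima with $\alpha = 1$, and compare $d(x, \bar S)$ with $c\,\theta(x)$:

    import numpy as np
    from scipy.optimize import minimize

    # Convex instance: f(x) = x_1 on Omega = [0,1]^2, i.e., constraints
    # -x <= 0 and x - 1 <= 0. Optimal set: Sbar = {0} x [0,1], alpha = 1.
    grad_f = lambda x: np.array([1.0, 0.0])
    g = lambda x: np.concatenate([-x, x - 1.0])
    grad_g = lambda x: np.vstack([-np.eye(2), np.eye(2)])
    H = np.eye(2)                          # mu = nu = 1

    def theta(x):
        c, G, gx = grad_f(x), grad_g(x), g(x)
        obj = lambda d: c @ d + 0.5 * d @ H @ d
        cons = {"type": "ineq", "fun": lambda d: -(gx + G @ d)}
        res = minimize(obj, np.zeros(2), jac=lambda d: c + H @ d,
                       constraints=[cons], method="SLSQP")
        return -res.fun

    alpha = 1.0; mu = nu = 1.0
    c_const = 2.0 / alpha                  # c = 2/alpha
    delta = mu * alpha**2 / (8 * nu**2)    # delta = mu*alpha^2/(8*nu^2)

    for x in [np.array([0.10, 0.5]), np.array([0.05, 0.9])]:
        dist = x[0]                        # d(x, Sbar) = x_1 inside Omega
        t = theta(x)
        if t <= delta:                     # the local regime of Theorem 10
            print(dist, "<=", c_const * t, dist <= c_const * t + 1e-8)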

4. Finite Termination

In this section, we study necessary and sufficient conditions for a feasible solution sequence of the nonconvex optimization problem (P) to terminate finitely at its solution sets (the stationary points set and the KKT points set).

4.1. Generalized Nondegenerate Set

First, we introduce the concept of a nondegenerate set.

Definition 12. Let $\bar S \subseteq S$; if

$$-\nabla f(\bar x) \in \operatorname{int} N_{\Omega}(\bar x) \quad \text{for all } \bar x \in \bar S,$$

then $\bar S$ is said to be a nondegenerate set.
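To make this condition concrete in a simple case (our own example, not from the paper): for $\Omega = \{x \in \mathbb{R}^n : x \ge 0\}$ and $\bar x = 0$, the normal cone is $N_{\Omega}(0) = \{v : v \le 0\}$, so $-\nabla f(0) + \varepsilon B \subseteq N_{\Omega}(0)$ holds exactly when every component of $\nabla f(0)$ is at least $\varepsilon$:

    import numpy as np

    def nondegenerate_at_origin(grad_f_0, eps=0.0):
        """Check -grad f(0) + eps*B subset-of N_Omega(0) for Omega = {x >= 0},
        where N_Omega(0) = {v : v <= 0}. The ball inclusion holds iff every
        component of -grad f(0) is <= -eps, i.e., grad f(0) >= eps."""
        return bool(np.all(grad_f_0 >= eps))

    print(nondegenerate_at_origin(np.array([1.0, 2.0]), eps=0.5))  # True
    print(nondegenerate_at_origin(np.array([1.0, 0.0]), eps=0.5))  # False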

Now, we further extend the definition of nondegeneracy. In what follows, $K_0$ and $K_1$ denote infinite subsets of $\{1, 2, \dots\}$.

Definition 13. Let $\{x_k\} \subseteq \Omega$ and $\bar S \subseteq S$. $\bar S$ is said to be a generalized nondegenerate set for $\{x_k\}$ if, for every subsequence $\{x_k\}_{k \in K_0}$, there exist an infinite subset $K_1 \subseteq K_0$, a point $\bar x \in \bar S$, and $\varepsilon > 0$ such that
(i) $\liminf_{k \in K_1,\, k \to \infty} \langle \nabla f(x_k) - \nabla f(\bar x), x_k - \bar x\rangle / \|x_k - \bar x\| \ge 0$ (with the convention that the quotient is $0$ when $x_k = \bar x$);
(ii) $-\nabla f(\bar x) + \varepsilon B \subseteq N_{\Omega}(\bar x)$.

It is easy to verify that the situations described in the following propositions are special cases of a generalized nondegenerate set.

Proposition 14. Let $\bar S \subseteq S$ be a nondegenerate set, and let $\{x_k\} \subseteq \Omega$ be a bounded sequence whose every accumulation point belongs to $\bar S$. Then, $\bar S$ is a generalized nondegenerate set for $\{x_k\}$.

Proposition 15. Let $\{x_k\} \subseteq \Omega$ be bounded. If, for every accumulation point $\bar x$ of $\{x_k\}$, there exists $\varepsilon > 0$ such that

$$-\nabla f(\bar x) + \varepsilon B \subseteq N_{\Omega}(\bar x),$$

then the set of accumulation points of $\{x_k\}$ is a generalized nondegenerate set for $\{x_k\}$.

From [20, Theorem 3.1], we know that the gradient $\nabla f$ is constant on the optimal solution set of a convex optimization problem. So, according to the monotonicity of $\nabla f$, we get the following.

Proposition 16. Let $S^*$, the optimal solution set of a convex optimization problem (P), be a nondegenerate set. Then, for any $\{x_k\} \subseteq \Omega$, $S^*$ is a generalized nondegenerate set for $\{x_k\}$.

We now give a necessary and sufficient condition for a feasible solution sequence of the nonconvex optimization problem (P) to terminate finitely at a generalized nondegenerate solution set.

Theorem 17. For the nonconvex optimization problem (P), let $\{x_k\} \subseteq \Omega$, and let the solution set $\bar S$ (with $\bar S \subseteq S$ or $\bar S \subseteq K$) be a generalized nondegenerate set for $\{x_k\}$. Then, $\{x_k\}$ terminates finitely at the solution set if and only if

$$\lim_{k \to \infty} \nabla_{\Omega} f(x_k) = 0. \tag{45}$$

Proof. Consider the following.
Necessity. If $x_k \in \bar S \subseteq S$ for all large $k$, by the definition of a stationary point, we have $\nabla_{\Omega} f(x_k) = 0$. If $\bar S \subseteq K$, then, since $K \subseteq S$, we also have $\nabla_{\Omega} f(x_k) = 0$ for all large $k$.
Sufficiency. We only give the proof for $\bar S \subseteq K$; the proof for $\bar S \subseteq S$ is the same. Suppose (45) holds. We prove the result by contradiction. Suppose, on the contrary, that there exists an infinite subsequence $\{x_k\}_{k \in K_0}$ such that

$$x_k \notin \bar S \quad \text{for all } k \in K_0.$$

Since $\bar S$ is a generalized nondegenerate set for $\{x_k\}$, according to Definition 13, for the subsequence $\{x_k\}_{k \in K_0}$ there exist an infinite subset $K_1 \subseteq K_0$, $\bar x \in \bar S$, and $\varepsilon > 0$ such that conditions (i) and (ii) hold. From (ii), we know that, for every $u \in B$,

$$\langle -\nabla f(\bar x) + \varepsilon u,\ y - \bar x\rangle \le 0 \quad \text{for all } y \in \Omega,$$

where $B$ stands for the unit ball in $\mathbb{R}^n$. Taking the supremum over $u \in B$, we obtain

$$\langle -\nabla f(\bar x),\ \bar x - y\rangle \ge \varepsilon\|y - \bar x\| \quad \text{for all } y \in \Omega;$$

in particular, $\langle -\nabla f(\bar x), \bar x - x_k\rangle \ge \varepsilon\|x_k - \bar x\|$ for all $k \in K_1$. Note that $x_k \ne \bar x$ for $k \in K_1$, since $x_k \notin \bar S$. Since $\Omega$ is convex, we have $\bar x - x_k \in T_{\Omega}(x_k)$. Then, according to the above inequality and the property of the gradient projection, we have, for all $k \in K_1$,

$$\|\nabla_{\Omega} f(x_k)\| \ge \frac{\langle -\nabla f(x_k), \bar x - x_k\rangle}{\|\bar x - x_k\|} \ge \varepsilon + \frac{\langle \nabla f(x_k) - \nabla f(\bar x), x_k - \bar x\rangle}{\|x_k - \bar x\|}.$$

It follows from condition (i) that

$$\liminf_{k \in K_1,\, k \to \infty} \|\nabla_{\Omega} f(x_k)\| \ge \varepsilon > 0,$$

which is a contradiction with (45).

Corollary 18. For the nonconvex optimization problem (P), let $\{x_k\} \subseteq \Omega$ be a bounded sequence and let the stationary points set $S$ be a nondegenerate set. Then, $\{x_k\}$ terminates finitely at $S$ if and only if (45) holds.

Proof. The necessity is obvious. We only need to prove the sufficiency. Suppose (45) holds; that is, $\lim_{k \to \infty} \nabla_{\Omega} f(x_k) = 0$. According to the lower semicontinuity of $\|\nabla_{\Omega} f(\cdot)\|$ and the boundedness of $\{x_k\}$, every accumulation point of $\{x_k\}$ lies in $S$. Therefore, from Proposition 14, we know that $S$ is a generalized nondegenerate set for $\{x_k\}$. Then, according to (45) and Theorem 17, the sufficiency holds.

Corollary 19. For the nonconvex optimization problem (P), let $\{x_k\} \subseteq \Omega$ be a bounded sequence such that, for each of its accumulation points $\bar x$, there exists $\varepsilon > 0$ with

$$-\nabla f(\bar x) + \varepsilon B \subseteq N_{\Omega}(\bar x).$$

Then, $\{x_k\}$ terminates finitely at the solution set $S$ (or $K$) if and only if (45) holds.

Proof. The result follows from Proposition 15 and Theorem 17.

Corollary 20. For the convex optimization problem (P), let $\{x_k\} \subseteq \Omega$, and let $S^*$ be a nondegenerate set. Then, $\{x_k\}$ terminates finitely at $S^*$ if and only if (45) holds.

Proof. Here, we have $S = S^*$. The result then follows from Proposition 16 and Theorem 17.

In the following, we use the global error bound for the projected gradient obtained in the last section to characterize the necessary and sufficient condition for finite termination of a feasible solution sequence by means of the merit function $\theta$, which is easy to compute.

Theorem 21. For the nonconvex optimization problem (P), let $\{x_k\} \subseteq \Omega$, and let the KKT points set $\bar K \subseteq K$ be a generalized nondegenerate set for $\{x_k\}$. Then, $\{x_k\}$ terminates finitely at $\bar K$ if and only if

$$\lim_{k \to \infty} \theta(x_k) = 0. \tag{58}$$

Proof. Consider the following.
Necessity. If $x_k \in \bar K$ for all large $k$, then, according to Theorem 4, we have $\theta(x_k) = 0$.
Sufficiency. Suppose (58) holds. According to Theorem 9, we have $\lim_{k \to \infty} \nabla_{\Omega} f(x_k) = 0$; that is, (45) holds. From Theorem 17, we get the sufficiency.
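In computational terms, Theorem 21 says that $\theta(x_k) \le \mathrm{tol}$ is a legitimate stopping test. A toy illustration (our own sketch with a hypothetical instance and the full SQP step $x_{k+1} = x_k + d(x_k)$; the step-size rules and convergence safeguards of practical SQP methods are omitted):

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical convex instance: f(x) = 0.5*||x - a||^2 with a outside
    # Omega = {x : x_1 + x_2 <= 1}; the unique KKT point is P_Omega(a).
    a = np.array([2.0, 0.0])
    grad_f = lambda x: x - a
    g = lambda x: np.array([x[0] + x[1] - 1.0])
    grad_g = lambda x: np.array([[1.0, 1.0]])
    H = np.eye(2)

    def subproblem(x):
        c, G, gx = grad_f(x), grad_g(x), g(x)
        obj = lambda d: c @ d + 0.5 * d @ H @ d
        cons = {"type": "ineq", "fun": lambda d: -(gx + G @ d)}
        res = minimize(obj, np.zeros(2), jac=lambda d: c + H @ d,
                       constraints=[cons], method="SLSQP")
        return res.x, -res.fun

    x, tol = np.zeros(2), 1e-10
    for k in range(50):
        d_x, theta_x = subproblem(x)
        print(k, x, theta_x)
        if theta_x <= tol:           # stopping test suggested by Theorem 21
            break
        x = x + d_x                  # full SQP step (toy setting only)

    print("terminated at", x)        # approx (1.5, -0.5) = P_Omega(a)

On this instance the iteration actually terminates finitely: the first step lands exactly on the KKT point, after which $\theta$ vanishes.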

Lemma 22. For the nonconvex optimization problem (P), let $\{x_k\} \subseteq \Omega$, and suppose (58) holds. If $\bar x$ is an accumulation point of $\{x_k\}$ satisfying the Mangasarian-Fromovitz constraint qualification, then $\bar x$ is a KKT point of (P).

Proof. Without loss of generality, we assume that $\lim_{k \to \infty} x_k = \bar x$ (passing to a subsequence if necessary), and denote by $H_k$ the matrix used in the subproblem at $x_k$. Since the active index sets of the subproblems are drawn from finitely many possibilities, there exist an index set $I$ and an infinite subsequence, again denoted by $\{x_k\}$, such that $I = \{i : g_i(x_k) + \langle \nabla g_i(x_k), d(x_k)\rangle = 0\}$ for all $k$. By Lemma 1, there exist $\lambda_i^k \ge 0$ for $i \in I$ such that

$$\nabla f(x_k) + H_k d(x_k) + \sum_{i \in I} \lambda_i^k \nabla g_i(x_k) = 0.$$

We now show that the sequence $\{\lambda^k\}$ must be bounded. Suppose, on the contrary, that $\|\lambda^k\| \to \infty$, and suppose (taking a further subsequence) that $\lambda^k/\|\lambda^k\| \to \bar\lambda$ with $\bar\lambda \ge 0$ and $\|\bar\lambda\| = 1$. By (58) and Lemma 3, $\|d(x_k)\| \le \sqrt{2\theta(x_k)/\mu} \to 0$, and hence $\|H_k d(x_k)\| \le \nu\|d(x_k)\| \to 0$. Dividing the two sides of the above equation by $\|\lambda^k\|$ and taking limits, by the continuity of $\nabla f$ and $\nabla g_i$ we obtain

$$\sum_{i \in I} \bar\lambda_i \nabla g_i(\bar x) = 0, \qquad \bar\lambda \ge 0,\ \bar\lambda \ne 0.$$

Moreover, for $i \in I$, $g_i(x_k) = -\langle \nabla g_i(x_k), d(x_k)\rangle \to 0$, so $g_i(\bar x) = 0$; that is, $I$ is contained in the active set at $\bar x$.
According to the assumption, $\bar x$ satisfies the Mangasarian-Fromovitz constraint qualification; that is, there is $z$ such that $\langle \nabla g_i(\bar x), z\rangle < 0$ for all active $i$. Then

$$0 = \Big\langle \sum_{i \in I} \bar\lambda_i \nabla g_i(\bar x),\ z\Big\rangle = \sum_{i \in I} \bar\lambda_i \langle \nabla g_i(\bar x), z\rangle < 0,$$

which is a contradiction. So we may assume that $\lambda^k \to \bar\lambda \ge 0$. Taking the limit in the stationarity equation above and using the continuity of $\nabla f$ and $\nabla g_i$, we obtain

$$\nabla f(\bar x) + \sum_{i \in I} \bar\lambda_i \nabla g_i(\bar x) = 0, \qquad \bar\lambda_i \ge 0, \qquad g_i(\bar x) = 0 \ \text{for } i \in I,$$

which means that $\bar x$ is a KKT point of (P).

Corollary 23. For the nonconvex optimization problem (P), suppose that $K$ is a nondegenerate set and that $\{x_k\} \subseteq \Omega$ is bounded and each of its accumulation points satisfies the Mangasarian-Fromovitz constraint qualification. Then, $\{x_k\}$ terminates finitely at $K$ if and only if the sequence satisfies (58).

Proof. By Lemma 22, every accumulation point of $\{x_k\}$ is a KKT point; hence, by Proposition 14, $K$ is a generalized nondegenerate set for $\{x_k\}$, and the result follows from Theorem 21.

Corollary 24. For the nonconvex optimization problem (P), suppose that $\{x_k\} \subseteq \Omega$ is a bounded sequence and that, for each of its accumulation points $\bar x$, there exists $\varepsilon > 0$ such that

$$-\nabla f(\bar x) + \varepsilon B \subseteq N_{\Omega}(\bar x).$$

Then, $\{x_k\}$ terminates finitely at $K$ if and only if the sequence satisfies (58).

Proof. The result follows from Proposition 15 and Theorem 21.

4.2. Generalized Weak Sharp Minima

In [20], Burke and Ferris gave an equivalent condition for weak sharp minima of a convex optimization problem; that is, the optimal solution set $\bar S$ of a convex optimization problem is a set of weak sharp minima if and only if, for every $\bar x \in \bar S$,

$$-\nabla f(\bar x) \in \operatorname{int}\Big(\bigcap_{x \in \bar S} \big[T_{\Omega}(x) \cap N_{\bar S}(x)\big]^{\circ}\Big). \tag{72}$$

Generally speaking, for a nonconvex optimization problem, (72) is only a necessary condition for the weak sharp minima condition (11); that is, (72) is weaker than (11). However, in view of its importance for analyzing the finite termination of algorithms, some previous studies, for example, [21, 22], directly use (72) as the definition of weak sharp minima of the solution set. In this paper, we also use the weaker condition (72) to define a set of weak sharp minima.

Definition 25. Let $\bar S \subseteq S$. $\bar S$ is said to be a set of weak sharp minima if and only if (72) holds.

Now, we further extend the definition of weak sharp minima as follows.

Definition 26. Let $\{x_k\} \subseteq \Omega$, let $\bar S \subseteq S$ be closed and convex, and write $\bar x_k := P_{\bar S}(x_k)$. $\bar S$ is said to be a set of generalized weak sharp minima for $\{x_k\}$ if, for every subsequence $\{x_k\}_{k \in K_0}$, there exist an infinite subset $K_1 \subseteq K_0$ and $\varepsilon > 0$ such that
(i) $-\nabla f(\bar x_k) + \varepsilon B \subseteq \big[T_{\Omega}(\bar x_k) \cap N_{\bar S}(\bar x_k)\big]^{\circ}$ for all $k \in K_1$;
(ii) $\liminf_{k \in K_1,\, k \to \infty} \langle \nabla f(x_k) - \nabla f(\bar x_k), x_k - \bar x_k\rangle / \|x_k - \bar x_k\| \ge 0$ (with the convention that the quotient is $0$ when $x_k = \bar x_k$).

As with generalized nondegenerate sets, it is easy to verify that the situations described in the following propositions are special cases of a set of generalized weak sharp minima.

Proposition 27. Let $\bar S$ be a set of weak sharp minima, and let $\{x_k\} \subseteq \Omega$ be a bounded sequence whose every accumulation point belongs to $\bar S$. Then, $\bar S$ is a set of generalized weak sharp minima for $\{x_k\}$.

Proposition 28. Let $\{x_k\} \subseteq \Omega$ be bounded. If, for every accumulation point $\bar x$ of $\{x_k\}$, there exists $\varepsilon > 0$ such that

$$-\nabla f(\bar x) + \varepsilon B \subseteq \big[T_{\Omega}(\bar x) \cap N_{\bar S}(\bar x)\big]^{\circ},$$

then $\bar S$ is a set of generalized weak sharp minima for $\{x_k\}$.

Proposition 29. Let the optimal solution set $S^*$ of a convex optimization problem be a set of weak sharp minima. Then, for any $\{x_k\} \subseteq \Omega$, $S^*$ is a set of generalized weak sharp minima for $\{x_k\}$.

We now give a necessary and sufficient condition for a feasible solution sequence of a nonconvex optimization problem to terminate finitely at its solution set of generalized weak sharp minima.

Theorem 30. For the nonconvex optimization problem (P), let $\{x_k\} \subseteq \Omega$, and suppose that its stationary points set $S$ is convex and is a set of generalized weak sharp minima for $\{x_k\}$. Then, $\{x_k\}$ terminates finitely at $S$ if and only if (45) holds.

Proof. Consider the following.
Necessity. If $x_k \in S$ for all large $k$, by the definition of a stationary point, we have $\nabla_{\Omega} f(x_k) = 0$.
Sufficiency. Suppose (45) holds. We prove the result by contradiction. Suppose, on the contrary, that there exists an infinite subsequence $\{x_k\}_{k \in K_0}$ such that

$$x_k \notin S \quad \text{for all } k \in K_0.$$

According to Definition 26, for the subsequence $\{x_k\}_{k \in K_0}$ there exist an infinite subset $K_1 \subseteq K_0$ and $\varepsilon > 0$ such that conditions (i) and (ii) hold along $K_1$. Let $\bar x_k = P_{S}(x_k)$; by the choice of $K_0$, we have $x_k \ne \bar x_k$ for all $k \in K_1$. Since $\Omega$ is convex, we have

$$x_k - \bar x_k \in T_{\Omega}(\bar x_k), \qquad \bar x_k - x_k \in T_{\Omega}(x_k).$$

Since $S$ is convex, according to the property of the projection and the definition of the normal cone, we have $x_k - \bar x_k \in N_{S}(\bar x_k)$; hence

$$x_k - \bar x_k \in T_{\Omega}(\bar x_k) \cap N_{S}(\bar x_k).$$

Then, by condition (i), for all $k \in K_1$ and all $u \in B$,

$$\langle -\nabla f(\bar x_k) + \varepsilon u,\ x_k - \bar x_k\rangle \le 0,$$

and taking the supremum over $u \in B$ gives $\langle \nabla f(\bar x_k), x_k - \bar x_k\rangle \ge \varepsilon\|x_k - \bar x_k\|$. According to this inequality and the property of the gradient projection, we have, for all $k \in K_1$,

$$\|\nabla_{\Omega} f(x_k)\| \ge \frac{\langle -\nabla f(x_k), \bar x_k - x_k\rangle}{\|\bar x_k - x_k\|} \ge \varepsilon + \frac{\langle \nabla f(x_k) - \nabla f(\bar x_k), x_k - \bar x_k\rangle}{\|x_k - \bar x_k\|}.$$

It follows from condition (ii) that

$$\liminf_{k \in K_1,\, k \to \infty} \|\nabla_{\Omega} f(x_k)\| \ge \varepsilon > 0,$$

which is a contradiction with (45).

In the same way as for generalized nondegeneracy, according to Propositions 27, 28, and 29 and Theorem 30, we obtain the following corollaries.

Corollary 31. For the nonconvex optimization problem (P), suppose that $S$ is convex and is a set of weak sharp minima, and that $\{x_k\} \subseteq \Omega$ is a bounded sequence. Then, $\{x_k\}$ terminates finitely at $S$ if and only if (45) holds.

Corollary 32. For the nonconvex optimization problem (P), suppose that $S$ is convex, that $\{x_k\} \subseteq \Omega$ is a bounded sequence, and that, for each of its accumulation points $\bar x$, there exists $\varepsilon > 0$ such that

$$-\nabla f(\bar x) + \varepsilon B \subseteq \big[T_{\Omega}(\bar x) \cap N_{S}(\bar x)\big]^{\circ}.$$

Then, $\{x_k\}$ terminates finitely at $S$ if and only if (45) holds.

Corollary 33. For the convex optimization problem (P), suppose that the optimal solution set $S^*$ is a set of weak sharp minima and $\{x_k\} \subseteq \Omega$. Then, $\{x_k\}$ terminates finitely at $S^*$ if and only if (45) holds.

Remark. Corollary 33 is [25, Theorem 2], which simplifies Theorem 4.7 of Burke and Ferris [20] by removing two assumptions of [20, Theorem 4.7], namely that $d(x_k, S^*) \to 0$ and that $\nabla f$ is uniformly continuous on an open set containing $\Omega$. Therefore, our Theorem 30 extends [20, Theorem 4.7] to nonconvex optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (nos. 11271233 and 10971118) and the Natural Science Foundation of Shandong Province (no. ZR2012AM016).