Abstract

This paper is a counterpart of Bi et al., 2011. For a locally optimal solution to the nonlinear second-order cone programming (SOCP) problem, under Robinson’s constraint qualification, we establish the equivalence of the following three conditions: the nonsingularity of Clarke’s Jacobian of the Fischer-Burmeister (FB) nonsmooth system for the Karush-Kuhn-Tucker conditions; the strong second-order sufficient condition together with constraint nondegeneracy; and the strong regularity of the Karush-Kuhn-Tucker point.

1. Introduction

The nonlinear second-order cone programming (SOCP) problem can be stated as where , , and are given twice continuously differentiable functions, and is the Cartesian product of some second-order cones, that is, with and being the second-order cone (SOC) in defined by By introducing a slack variable to the second constraint, the SOCP (1) is equivalent to In this paper, we will concentrate on this equivalent formulation of problem (1).
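For reference, the second-order cone and the Cartesian structure of the constraint cone admit the standard description (the usual textbook definition, stated here for concreteness):

```latex
\mathcal{K}^{n} \;=\; \bigl\{\, x = (x_{1}, x_{2}) \in \mathbb{R} \times \mathbb{R}^{n-1} \;:\; x_{1} \ge \| x_{2} \| \,\bigr\},
\qquad
\mathcal{K} \;=\; \mathcal{K}^{n_{1}} \times \cdots \times \mathcal{K}^{n_{r}},
```

with the norm being the Euclidean norm and the one-dimensional cone understood as the nonnegative reals.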

Let be the Lagrangian function of problem (4) and denote by the normal cone of at in the sense of convex analysis [1]: Then the Karush-Kuhn-Tucker (KKT) conditions for (4) take the following form: where is the derivative of at with respect to . Recall that is an SOC complementarity function associated with the cone if With an SOC complementarity function associated with , we may reformulate the KKT optimality conditions in (7) as the following nonsmooth system:

The most popular SOC complementarity functions include the vector-valued natural residual (NR) function and Fischer-Burmeister (FB) function, respectively, defined as where is the projection operator onto the closed convex cone , means the Jordan product of and itself, and denotes the unique square root of . It turns out that the FB SOC complementarity function enjoys almost all favorable properties of the NR SOC complementarity function (see [2]). Also, the squared norm of induces a continuously differentiable merit function with globally Lipschitz continuous derivative [3, 4]. This greatly facilitates the globalization of the semismooth Newton method [5, 6] for solving the FB nonsmooth system of KKT conditions:
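To make the two functions concrete, here is a small numerical sketch using their standard formulas, phi_NR(x, y) = x - Pi_K(x - y) and phi_FB(x, y) = x + y - (x∘x + y∘y)^{1/2}; the helper names (`jordan`, `spectral`, `sqrt_soc`, `proj_soc`) are illustrative, not from the paper:

```python
import numpy as np

def jordan(x, y):
    # Jordan product: x ∘ y = (<x, y>, x1*y2 + y1*x2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def spectral(z):
    # Spectral values z1 -/+ ||z2|| and the associated spectral vectors
    nz = np.linalg.norm(z[1:])
    w = z[1:] / nz if nz > 0 else np.eye(len(z) - 1)[0]  # any unit vector if z2 = 0
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return z[0] - nz, z[0] + nz, u1, u2

def sqrt_soc(z):
    # Unique square root of z in K^n (requires z in K^n)
    l1, l2, u1, u2 = spectral(z)
    return np.sqrt(max(l1, 0.0)) * u1 + np.sqrt(max(l2, 0.0)) * u2

def proj_soc(z):
    # Projection onto K^n: keep only the nonnegative parts of the spectral values
    l1, l2, u1, u2 = spectral(z)
    return max(l1, 0.0) * u1 + max(l2, 0.0) * u2

def phi_nr(x, y):
    return x - proj_soc(x - y)

def phi_fb(x, y):
    return x + y - sqrt_soc(jordan(x, x) + jordan(y, y))

# A complementary pair in K^3: x, y in K, x ∘ y = 0 (equivalently <x, y> = 0)
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
```

Both functions vanish exactly at complementary pairs and are nonzero otherwise, which is the reformulation property the nonsmooth KKT system relies on.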

Recently, with the help of [7, Theorem 30] and [8, Lemma 11], Wang and Zhang [9] gave a characterization for the strong regularity of the KKT point of the SOCP (1) via a nonsingularity study of Clarke's Jacobian of the NR nonsmooth system They showed that the strong regularity of the KKT point, the nonsingularity of Clarke's Jacobian of at the KKT point, and the strong second-order sufficient condition and constraint nondegeneracy [7] are all equivalent. These nonsingularity conditions are better structured than those of [10] for the nonsingularity of the -subdifferential of the NR system. It is then natural to ask: is it possible to obtain a characterization for the strong regularity of the KKT point by studying the nonsingularity of Clarke's Jacobian of ? Note that, up to now, it is not even known whether the -subdifferential of the FB system is nonsingular without the strict complementarity assumption.

In this work, for a locally optimal solution to the nonlinear SOCP (4), under Robinson's constraint qualification, we show that the strong second-order sufficient condition and constraint nondegeneracy introduced in [7], the nonsingularity of Clarke's Jacobian of at the KKT point, and the strong regularity of the KKT point are equivalent to each other. This, on the one hand, gives a new characterization for the strong regularity of the KKT point and, on the other hand, provides a mild condition to guarantee the quadratic convergence rate of the semismooth Newton method [5, 6] for the FB system. Note that parallel results were recently obtained for the FB system of nonlinear semidefinite programming (see [11]), but they do not subsume the present ones. As will be seen in Sections 3 and 4, the analysis techniques here are totally different from those in [11], and it seems hard to treat the two settings in a unified framework under the Euclidean Jordan algebra. The main reason is that the Clarke Jacobians associated with the FB SOC complementarity function and the FB semidefinite cone complementarity function require completely different analyses.

Throughout this paper, denotes an identity matrix of appropriate dimension,    denotes the space of -dimensional real column vectors, and is identified with . Thus, is viewed as a column vector in . The notations , , and denote the interior, the boundary, and the boundary excluding the origin of , respectively. For any , we write (resp., ) if (resp., ). For any given real symmetric matrix , we write (resp., ) if is positive semidefinite (resp., positive definite). In addition, and denote the derivative and the second-order derivative, respectively, of a twice differentiable function with respect to the variable .

2. Preliminary Results

First we recall from [12] the definition of Jordan product and spectral factorization.

Definition 1. The Jordan product of is given by

Unlike scalar or matrix multiplication, the Jordan product is not associative in general. The identity element under this product is , that is, for all . For each , we define the associated arrow matrix by Then it is easy to verify that for any . Recall that each admits a spectral factorization, associated with , of the form where and are the spectral values and the associated spectral vectors of , respectively, with respect to the Jordan product, defined by with if and otherwise being any vector in satisfying .
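A short numerical check of these objects (illustrative helper names): the arrow matrix represents the Jordan product with a fixed element, the spectral factorization reconstructs the vector, and the spectral vectors behave like idempotents.

```python
import numpy as np

def jordan(x, y):
    # Jordan product: x ∘ y = (<x, y>, x1*y2 + y1*x2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def arrow(x):
    # Arrow matrix L_x: first row/column carry x, diagonal carries x1
    n = len(x)
    L = x[0] * np.eye(n)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

def spectral(z):
    # Spectral values z1 -/+ ||z2|| and the associated spectral vectors
    nz = np.linalg.norm(z[1:])
    w = z[1:] / nz if nz > 0 else np.eye(len(z) - 1)[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return z[0] - nz, z[0] + nz, u1, u2

x = np.array([2.0, 1.0, -2.0])
y = np.array([0.5, -1.0, 3.0])
e = np.array([1.0, 0.0, 0.0])   # identity element of the Jordan product
l1, l2, u1, u2 = spectral(x)
```

In particular, the last assertion below exhibits the non-associativity of the Jordan product for this pair of vectors.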

Definition 2. The determinant of a vector is defined as , and a vector is said to be invertible if its determinant is nonzero.

By the formula of spectral factorization, it is easy to compute that the projection of onto the closed convex cone , denoted by , has the expression Define . Then, using the expression of , it follows that
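The projection formula can be verified against the variational characterization of the projection; the following is a sketch with illustrative names.

```python
import numpy as np

def spectral(z):
    nz = np.linalg.norm(z[1:])
    w = z[1:] / nz if nz > 0 else np.eye(len(z) - 1)[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return z[0] - nz, z[0] + nz, u1, u2

def proj_soc(z):
    # Pi_K(z) = [lambda_1]_+ u1 + [lambda_2]_+ u2
    l1, l2, u1, u2 = spectral(z)
    return max(l1, 0.0) * u1 + max(l2, 0.0) * u2

def in_soc(z, tol=1e-12):
    return z[0] >= np.linalg.norm(z[1:]) - tol

z = np.array([0.0, 1.0, 0.0])   # a point outside K^3
p = proj_soc(z)
```

The checks confirm the defining properties of the projection: the image lies in the cone, the residual is orthogonal to it, and points of the cone (resp. its polar) are fixed (resp. mapped to zero).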

The spectral factorizations of the vectors , , and the matrix have various interesting properties (see [13]). We list several of them that will be used later.

Property 3. For any with spectral factorization (15), we have the following.(a).(b)If , then and .(c)If , then and is invertible with (d) (resp., ) if and only if (resp., ).
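Definition 2 and the invertibility statement in Property 3 can be checked numerically: the determinant is the product of the two spectral values, and for an invertible vector the Jordan inverse is obtained by negating the second block and dividing by the determinant (illustrative names):

```python
import numpy as np

def jordan(x, y):
    # Jordan product: x ∘ y = (<x, y>, x1*y2 + y1*x2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def det_soc(x):
    # det(x) = x1^2 - ||x2||^2 = lambda_1 * lambda_2
    return x[0] ** 2 - x[1:] @ x[1:]

def inv_soc(x):
    # Jordan inverse: (x1, -x2) / det(x), defined whenever det(x) != 0
    return np.concatenate(([x[0]], -x[1:])) / det_soc(x)

x = np.array([2.0, 1.0, 0.0])
e = np.array([1.0, 0.0, 0.0])
nz = np.linalg.norm(x[1:])
```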

The following lemma states a result for the arrow matrices associated with and , which will be used in the next section to characterize an important property for the elements of Clarke's Jacobian of at a general point.

Lemma 4. For any given and , if , then where means the spectral norm of a real matrix . Consequently, it holds that

Proof. Let . From [13, Proposition 3.4], it follows that This shows that , and the first part follows. Note that, for any , By letting , we immediately obtain the second part.
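The spectral-norm estimates of Lemma 4 rest on the eigenvalue structure of the arrow matrix, which is easy to confirm numerically: the arrow matrix is symmetric with eigenvalues equal to the two spectral values plus the first component repeated n - 2 times, so its spectral norm is |x1| + ||x2|| (a numeric illustration, not the lemma's inequality itself):

```python
import numpy as np

def arrow(x):
    # Arrow matrix L_x associated with x
    n = len(x)
    L = x[0] * np.eye(n)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

x = np.array([1.0, 3.0, -4.0])
nz = np.linalg.norm(x[1:])                      # ||x2|| = 5
eigs = np.sort(np.linalg.eigvalsh(arrow(x)))    # symmetric, so use eigvalsh
```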

The following two lemmas state the properties of with which are often used in the subsequent sections. The proof of Lemma 5 is given in [3, Lemma 2].

Lemma 5. For any with , one has

Lemma 6. For any , let . (a)If , then for any , it holds that (b)If , then the following four equalities hold and consequently the expression of can be simplified as

Proof. (a) The result is direct by the equalities of Lemma 5 since .
(b) Since , we must have . Using Lemma 5, and , we easily obtain the first part. Note that . Using Property 3(b) and Lemma 5 yields (27).

When satisfies the complementarity condition, we have the following result.

Lemma 7. For any given , if and , then there exists a constant such that and .

Proof. Since , we have that and , and consequently, This means that there exists such that , and then .
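Lemma 7 can be illustrated numerically: for a boundary pair with zero inner product, the constant is recovered as the ratio of the first components, and the Jordan product of the pair vanishes (hand-picked example data):

```python
import numpy as np

def jordan(x, y):
    # Jordan product: x ∘ y = (<x, y>, x1*y2 + y1*x2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

x = np.array([1.0, 0.6, 0.8])      # on bd K^3 \ {0}: x1 = ||x2|| = 1
y = np.array([3.0, -1.8, -2.4])    # on bd K^3, chosen with <x, y> = 0
sigma = y[0] / x[0]                # the constant from Lemma 7
```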

Next we recall from [14] the strong regularity for a solution of generalized equation where is a continuously differentiable mapping from a finite dimensional real vector space to itself, is a closed convex set in , and is the normal cone of at . As will be shown in Section 4, the KKT condition (7) can be written in the form of (29).

Definition 8. We say that is a strongly regular solution of the generalized equation (29) if there exist neighborhoods of the origin and of such that for every , the linearized generalized equation has a unique solution in , denoted by , and the mapping is Lipschitz continuous.

To close this section, we recall from [15] Clarke's (generalized) Jacobian of a locally Lipschitz mapping. Let be an open set and a locally Lipschitz continuous function on . By Rademacher's theorem, is almost everywhere (Fréchet) differentiable in . We denote by the set of points in where is -differentiable. Then Clarke's Jacobian of at is defined by , where “conv” means the convex hull, and the -subdifferential , a name coined in [16], has the form For the concept of (strong) semismoothness, please refer to the literature [5, 6].
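As a one-dimensional illustration of these notions (using the scalar FB function rather than its SOC counterpart), the function phi(a, b) = a + b - sqrt(a^2 + b^2) is differentiable away from the origin, its gradients are constant along rays, and the limiting gradients at the origin trace the circle of radius 1 centered at (1, 1); this circle is the B-subdifferential at the origin, and Clarke's generalized gradient is its convex hull:

```python
import numpy as np

def grad_phi(a, b):
    # Gradient of phi(a, b) = a + b - sqrt(a^2 + b^2), valid away from the origin
    r = np.hypot(a, b)
    return np.array([1.0 - a / r, 1.0 - b / r])

# Gradients are constant along rays, so evaluating near 0 gives the limiting Jacobians
thetas = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
limits = [grad_phi(1e-9 * np.cos(t), 1e-9 * np.sin(t)) for t in thetas]
```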

Unless otherwise stated, in the rest of this paper, for any   , we write , where is the first component of and is a column vector consisting of the remaining entries of . For any , let

3. Directional Derivative and -Subdifferential

The function is directionally differentiable everywhere by [2, Corollary 3.3]. But, to the best of our knowledge, the expression of its directional derivative is not given in the literature. In this section, we derive its expression and then prove that the -subdifferential of at a general point coincides with that of its directional derivative function at the origin. Throughout this section, we assume that .

Proposition 9. For any given , the directional derivative of at with the direction has the following form. (a)If , then .(b)If , then .(c)If , then where , and is defined by

Proof. Part (a) is immediate by noting that is a positively homogeneous function. Part (b) is due to [13, Proposition 5.2]. We next prove part (c) by two subcases as shown in the following. In the rest of proof, we let with denote the spectral values of . Since , we have , and from Lemma 6(b) it follows that
(c.1): for sufficiently small . In this case, from Lemma 6(b), we know that has the following expression: Let be the first element of and the vector consisting of the remaining components of . By the above expression of , where the last equality follows from Lemma 5. The above two limits imply
(c.2): for sufficiently small . Let with the spectral values . An elementary calculation gives Also, since , applying the Taylor formula of at and Lemma 6(a) yields Now using the definition of and noting that and , we have that which in turn implies that We first calculate . Using (38) and (40), it is easy to see that and consequently, We next calculate . Since , using (38)-(39) and Lemma 6(a), Using and (40), we simplify the sum of the first two terms in (45) as Then, from (45) and , we obtain that We next make simplification for the numerator of the right hand side of (47). Note that Therefore, adding the last two equalities and using Lemma 5 yield that Combining this equality with (47) and using the definition of in (33), we readily get We next calculate . To this end, we also need to take a look at . From (38)-(39) and (40), it follows that Together with (44) and (50), we have that where the last equality is using and . Combining with (42), (44), and (50), a suitable rearrangement shows that has the expression (32).
Finally, we show that, when for sufficiently small , the formula in (32) reduces to the one in (37). Indeed, an elementary calculation yields This implies that if for sufficiently small , that is, for sufficiently small , then , and hence . Thus, in (32) can be simplified as where the equality uses . The proof is complete.
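Part (a) of Proposition 9 relies on the positive homogeneity of the FB function: scaling both arguments by t > 0 scales the value by t, so the directional derivative at the origin is the function itself. This is easy to confirm numerically (illustrative helper names):

```python
import numpy as np

def jordan(x, y):
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def sqrt_soc(z):
    # Square root in K^n via the spectral factorization
    nz = np.linalg.norm(z[1:])
    w = z[1:] / nz if nz > 0 else np.eye(len(z) - 1)[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(max(z[0] - nz, 0.0)) * u1 + np.sqrt(max(z[0] + nz, 0.0)) * u2

def phi_fb(x, y):
    return x + y - sqrt_soc(jordan(x, x) + jordan(y, y))

x = np.array([1.0, 0.3, -0.2])
y = np.array([0.5, 0.1, 0.4])
t = 1e-7
# difference quotient of phi_FB at the origin in direction (x, y)
quotient = (phi_fb(t * x, t * y) - phi_fb(0.0 * x, 0.0 * y)) / t
```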

As a consequence of Proposition 9, we have the following sufficient and necessary characterizations for the (continuously) differentiable points of and .

Corollary 10. The function is (continuously) differentiable at if and only if , where is defined in (31). Also, when , one has
The function is (continuously) differentiable at if and only if is invertible. Also, when is invertible, .

Proof. (a) The “if” part is direct by [13, Proposition 5.2]. We next prove the “only if” part. If is differentiable at , then is a linear function of , which by Proposition 9 holds only if . The formula of is given in [13].
(b) Since , by part (a) is (continuously) differentiable at if and only if , which is equivalent to requiring that is invertible since always holds. When is invertible, the formula of follows from part (a).

For any given with , define for any , and let . Then, comparing with (33), we can rewrite the function as Note that the Euclidean norm is globally Lipschitz continuous and strongly semismooth everywhere in , and is a linear function. Then, (57) implies that is globally Lipschitz continuous and strongly semismooth everywhere in by [17, Theorem 19]. Also, the function is differentiable at if and only if . The following lemma characterizes the -subdifferential of the function at the origin.

Lemma 11. For any given with , let be defined by (33). Then, the B-subdifferential of the function at takes the following form:

Proof. Let . By the definition of the elements in , there exists a sequence in converging to with such that By (57), a simple computation shows that such belongs to the set on the right hand side of (58). Thus, is included in the set on the right hand side of (58). In fact, and in (58) are the limit points of and , respectively.
Conversely, let be an arbitrary element of the set on the right hand side of (58). Then, there exists a with such that Take the sequence with and . Clearly, as . Also, by Lemma 5, it is easy to verify that This shows that and . Hence, . Thus, the set on the right hand side of (58) is included in .

Now we may prove the equivalence between the -subdifferential of at a general point and that of its directional derivative function at . This result corresponds to that of [18, Lemma 14] established for the NR SOC function .

Lemma 12. For any given , let . Then,

Proof. The result is direct by Proposition 9(a)-(b) and Lemma A.1 in the Appendix.

Using Lemma 12, we may present an upper estimate for Clarke's Jacobian of at the point with , which will be used in the next section.

Proposition 13. For any given with , one has where and are real symmetric matrices defined as follows:

Proof. We first simplify the last two terms in (32) using . Note that where the last equality uses . Therefore, from (32), we have Now, applying Lemma 12, we immediately obtain that where, by Lemma 11 and the definition of Clarke's Jacobian,
Let with . Then, it suffices to prove that such and satisfy all inequalities and equalities in (63). By (68), there exists a vector with such that Using Lemma 5, it is immediate to obtain that This means that . Similarly, we also have . We next prove that . By Lemma 5, it is easy to verify that Using the two equalities, it is not hard to calculate that Similarly, we also have . In addition, we have that A similar argument also yields . The last four equalities in (63) are direct by Lemma 5 and the expression of , , , and .

To close this section, we establish a relation between the -subdifferential of at a complementarity point pair and that of at the corresponding point pair.

Lemma 14. Let satisfy , and . Then,

Proof. Since satisfies , and , there exist spectral vectors and nonnegative real numbers and such that Indeed, if or , then the statement clearly holds. If and , then the condition that , , and implies . From Lemma 7, there exists such that and . Together with the spectral factorizations of and , the conclusion in (75) follows. Since and , using (75) and yields that . This, along with the nonnegativity of and , implies By the definition of , we have for , where Comparing with the definition of , we only need to prove the following inclusion: For this purpose, let . By the definition of the elements in and Corollary 10(b), there exists a sequence converging to with invertible such that For each , let and . Then, using and (76), we have that and as . Also, . By Corollary 10(a), the function is continuously differentiable at with Together with (79), we have that . This shows that , and the inclusion in (78) follows. The proof is complete.

4. Nonsingularity Conditions

This section studies the nonsingularity of Clarke's Jacobian of at a KKT point. Let be a KKT point of the SOCP (4), that is, Taking into account that if and only if and satisfy we introduce the following index sets associated with and : From [19], we learn that the above six index sets form a partition of .

First of all, let us take a careful look at the properties of the elements in for , as stated in the following. The proof of Lemma 15 is given in the Appendix.

Lemma 15. For satisfying (82), let for each . Then, (a)when , there exists an orthogonal matrix with such that , where and take one of the forms with satisfying ,(b)when , there exists an orthogonal matrix with such that , where and take one of the forms with , satisfying .

The following proposition plays a key role in achieving the main result of this section.

Proposition 16. For satisfying (82), let for . Then, for any , it holds that Particularly, for each , the following implication also holds:

Proof. Throughout the proof, let and for . We prove the conclusion by discussing five cases as shown in the following arguments.
Case 1 . In this case . From Corollary 10(a), it follows that the function is continuously differentiable at . Therefore, Then, and . Together with , we get .
Case 2 . Using the same arguments as in Case 1 readily yields .
Case 3 . Now and . By (82) and Lemma 7, there exists such that and , and consequently, From Corollary 10(a), is continuously differentiable at , and hence Thus, implies that , and consequently, Making an inner product with , we have from and Lemma 5 that Together with (94), we have and .
We next prove the implication in (90). By the expressions of , and , where the second equivalence is due to and . Since the rank of is , the dimension of the solution space for the system equals . Note that is a nonzero solution of this linear system. Therefore, Making an inner product with for the equality and using , we get Using the similar arguments as above and noting that , we may obtain The last two equalities show that the implication in (90) holds.
Case 4 . By Lemma 15(b), there exists an orthogonal matrix such that and , where and are given by (86), and and take one of the forms in (87)-(88). Thus, we have When and , we have , and then . When and take the form in (87), we have . Consequently, where the last equality uses the definition of . When and take the form in (88), we also have . Using the same arguments as above, we have that has the form of (102). Thus, we have proved that .
Case 5 . Using Lemma 15(a) and following the same arguments as in Case 4, the result can be checked routinely. So, we omit the proof.

The following lemma states an important property for the elements of Clarke's Jacobian of at a general point, which will be used to prove Proposition 18.

Lemma 17. For any given , let . Then,

Proof. Since , by Carathéodory's theorem, there exist a positive integer and for such that where , and . For each , by Corollary 10(a) and the definition of the elements in , there exists a sequence in converging to with such that Consequently, for any , we have that From the continuity and convexity of the Euclidean norm and Lemma 4, we get Now assume that . Then, by the last inequality, we get the result.

Proposition 18. For satisfying (82), let . Then it holds that where, for any given , is the linear-quadratic function:

Proof. Fix any . Since and with for , we may rewrite as From Lemma 17 and (90) of Proposition 16, it then follows that In addition, by the definition of , for all since , and for since . This means that From the above two equations, we immediately obtain the desired result.

Before stating the main result of this section, we also need to recall several concepts, including constraint nondegeneracy, Robinson's constraint qualification (CQ) (see [20]), and the strong second-order sufficient condition introduced in [7, Theorem 30]. To this end, let and define by Then, we may rewrite the nonlinear SOCP (4) succinctly as follows:

Definition 19. A feasible vector of (4) is called constraint nondegenerate if where is the largest linear space contained in , that is, .

Definition 20. Robinson's CQ is said to hold at a feasible solution to (4) if which, since is a closed convex set in , can be equivalently written as

Clearly, the constraint nondegeneracy condition (114) implies Robinson's CQ (116). If is a locally optimal solution to (4) and Robinson's CQ holds at , then there exists a Lagrange multiplier , together with , satisfying the KKT conditions: In the sequel, we let denote the set of Lagrange multipliers satisfying (117).

Let be a KKT point of the SOCP (4). From [7, Lemma 25], it follows that the tangent cone of at takes the form of which implies that the largest linear space in has the following form: We next recall the critical cone of problem (4) at a feasible which is defined as The critical cone represents those directions for which the linearization of (4) does not provide any information about optimality of and is very important in studying second-order optimality conditions. Particularly, if the set of Lagrange multipliers at the point is nonempty, then can be rewritten as where and means the orthogonal complementarity space of . Now let . Then, using and the expression of , we have that

Definition 21. Let be a stationary point of the SOCP (4) such that . We say that the strong second-order sufficient condition holds at if where with defined by and denotes the affine hull of and is now equivalent to the span of :

Now we are in a position to prove the nonsingularity of Clarke's Jacobian of under the strong second-order sufficient condition and constraint nondegeneracy.

Proposition 22. Let be a KKT point of the nonlinear SOCP (4). Suppose that the strong second-order sufficient condition (123) holds at and is constraint nondegenerate. Then any element in is nonsingular.

Proof. Since the nondegeneracy condition (114) is assumed to hold at , from [21, Proposition 4.75], we know that . Then, by the definition of and , the strong second-order sufficient condition (123) takes the following form: where and with Let be an arbitrary element in . To prove that is nonsingular, let such that From the expression of , we know that there exists a such that where and with for all . The last system can be simplified as By the second and the third equations of (130) and (89), we get Comparing with the definition of in Definition 21, it follows that From the first and the second equations of (130), it is not hard to verify which, together with the third equation of (130) and Proposition 18, implies that This, together with (132) and (126), yields that . Thus, (130) reduces to From the second equation of (135), we have for . In addition, by the arguments for Case 3 of Proposition 16, for , and so for . Since for , has the two single eigenvalues and as well as the eigenvalues with multiplicity , and is the eigenvector corresponding to eigenvalue . Thus, from , we deduce By the second equation of (135), we use Proposition 16 with , to yield Using the constraint nondegeneracy condition (114), we know that there exist a vector and a vector such that Since , from (119), it follows that Combining the last four equations with the first equation of (135), we obtain Thus, and . Along with , we show that is nonsingular.

Note that if and only if . The KKT conditions in (7) can be equivalently written as the following generalized equation which is clearly of the form (29). Now, using Proposition 22 and [9, Theorem 3.1], we may establish the main result of this paper, which states that Clarke's Jacobian of at a KKT point is nonsingular if and only if the KKT point is a strongly regular solution of the generalized equation (141).
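To illustrate how the FB system is used algorithmically, here is a toy instance. Everything in it is an illustrative sketch, not the paper's method: the problem is the projection problem min (1/2)||x - c||^2 over x in K^3, and a central finite-difference Jacobian stands in for an element of Clarke's Jacobian. The KKT conditions of this problem reduce to phi_FB(x, x - c) = 0, whose solution is the projection of c onto the cone:

```python
import numpy as np

def jordan(x, y):
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def sqrt_soc(z):
    nz = np.linalg.norm(z[1:])
    w = z[1:] / nz if nz > 0 else np.eye(len(z) - 1)[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(max(z[0] - nz, 0.0)) * u1 + np.sqrt(max(z[0] + nz, 0.0)) * u2

def phi_fb(x, y):
    return x + y - sqrt_soc(jordan(x, x) + jordan(y, y))

c = np.array([0.0, 2.0, 0.0])        # point to project; Pi_K(c) = (1, 1, 0)

def Phi(x):
    # FB reformulation of the KKT system: multiplier mu = grad f(x) = x - c
    return phi_fb(x, x - c)

x = np.array([2.0, 0.5, 0.3])        # starting point
h = 1e-7
for _ in range(40):
    F = Phi(x)
    if np.linalg.norm(F) < 1e-11:
        break
    # central finite-difference Jacobian (stand-in for a Clarke Jacobian element)
    J = np.column_stack([(Phi(x + h * e) - Phi(x - h * e)) / (2.0 * h)
                         for e in np.eye(3)])
    x = x - np.linalg.solve(J, F)
```

Under the nonsingularity established in Theorem 23, a genuine semismooth Newton method using elements of Clarke's Jacobian converges locally quadratically; the sketch above only illustrates the residual of the FB system being driven to zero.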

Theorem 23. Let be a locally optimal solution to the nonlinear SOCP (4). Suppose that Robinson's CQ holds at this point. Let be such that is a KKT point of (4). Then the following statements are equivalent.(a)The strong second-order sufficient condition in Definition 21 holds at and is constraint nondegenerate.(b)Any element in is nonsingular.(c)Any element in is nonsingular.(d) is a strongly regular solution of the generalized equation (141).

Proof. First, Lemma 14 and the definition of and imply the following inclusion: Using this inclusion and Proposition 22, we have that (a) (b) (c). Since the SOCP (4) is obtained from (1) by introducing a slack variable, we know from [9, Theorem 3.1] that (a) (c) (d). Thus, we complete the proof of this theorem.

5. Conclusions

In this paper, for a locally optimal solution of the nonlinear SOCP, we established the equivalence between the nonsingularity of Clarke's Jacobian of the FB system and the strong regularity of the corresponding KKT point. This provides a new characterization for the strong regularity of nonlinear SOCPs and extends the result of [22, Corollary 3.7] for the FB system of variational inequalities with polyhedral cone constraints to the setting of SOCs. Also, this result implies that the semismooth Newton method [5, 6] applied to the FB system is locally quadratically convergent to a KKT point under the strong second-order sufficient condition and constraint nondegeneracy. We point out that we have recently established parallel (though not identical) results for the SDP case in [11]. However, it seems hard to treat the two settings in a unified framework under the Euclidean Jordan algebra, mainly because the analysis and techniques for the Clarke Jacobians associated with the FB SOC complementarity function and the FB SDP complementarity function are totally different.

Appendix

Lemma A.1. For any given , with , it holds that

Proof. We first prove that . Let . By the formula (32) and Lemma 11, there exists a vector with such that and defined by (60) satisfy where and are defined by (64). Take the sequences and with Let . By Lemma 5, a simple computation yields that Clearly, . From Corollary 10(a), it then follows that Let . Using the formula (19), we have that , where Since , and as , it follows that where the last two equalities are using Lemma 5. In addition, we compute that where are defined as follows: Together with the definition of and and (A.4), we can verify that Thus, the above arguments show that Comparing this with (A.2), we have . So, .
In what follows, we show that . Note that is equivalent to and , which is equivalent to Hence, is equivalent to saying that satisfy This means that must satisfy one of the following cases: (i) for some and ; (ii) for some and . Since and are symmetric with respect to their two arguments, we only need to prove one of the two cases. In the following arguments, we assume that for some and . Noting that since , we without loss of generality assume that . From (32) and Lemma 11, it is not hard to see that Let . By the definition of the elements in , there exists a sequence with converging to such that . From the arguments for the first part, we know that with and defined by (A.9). Thus, in order to prove that , it suffices to argue that the following limits: hold for some with . We proceed in two steps.
Step 1. To prove (taking a subsequence if necessary). For each , by the expressions of , and , it is easy to see that where and . An elementary computation yields that From the last two equations, we immediately obtain that This shows that, in order to achieve the result, it suffices to prove that which is equivalent to arguing that with An elementary computation yields that By the expressions of and , we compute that
Substep 1.1  . Since and , we have for sufficiently large . In addition, from (A.24), it is not difficult to obtain that Together with (A.23) and , we have that Taking the limit to the inequality, we obtain the limit in (A.21).
Substep 1.2  . Since and , we have for sufficiently large . Now, from (A.24), it is easy to obtain that Together with (A.23) and , it follows that Taking the limit to this inequality, we readily obtain the limit in (A.21).
Substep 1.3  . Now we must have or for sufficiently large . Then, using the same arguments as in Substeps 1.1 and 1.2, we get the limit in (A.21).
Let and with and defined by (A.9).
Step 2. To prove that . By the expression of and , it is easy to verify that and , which implies that By (A.29), we can verify that is equivalent to . From Step 1, we have that . This implies that, to prove that , it suffices to argue that Indeed, noting that , we readily have that
Now let and . Clearly, . Also, using the results of Steps 1 and 2 and (A.29), we can verify that This, along with (A.15), implies that . The result then follows.

Proof of Lemma 15. Let with for .
(a) Since and , we have . By Proposition 13, there exist some and such that where It is not hard to verify that the matrix has eigenvalue of multiplicity and a single eigenvalue , with the corresponding eigenvectors being , for , and , respectively, where are any unit vectors that span the linear subspace . Let . Then such is an orthogonal matrix satisfying Together with (A.33), we obtain and , where Using the equalities in (A.34) yields , and with . Along with , we have that
Since , and , there are exactly three cases for the vectors and satisfying (A.34): (1)  , ; (2)  , ; (3)  , . We next consider these three cases in turn.
Case  1  . Now we have . From the equality in (A.34) and , we deduce that , and hence . In addition, from the last two inequalities of (A.34), , which together with implies . Now plugging into (A.37) yields and . Therefore, can be taken as an identity matrix.
Case  2  . Under this case, since and , using the same arguments as in Case then yields and . Now plugging into (A.37), and become the one given by (85).
Case  3   . By the expressions of and , we calculate that Since , the definition of the elements in and the proof of [3, Lemma 6(b)] imply that , and hence . Thus, the zero diagonals imply , and and have the expression of (86).
(b) In view of the symmetry of and in and , the results readily follow by using similar arguments as in part (a). Thus, we complete the proof.

Acknowledgments

This work was supported by National Young Natural Science Foundation (no. 10901058) and Guangdong Natural Science Foundation (no. 9251802902000001), the Fundamental Research Funds for the Central Universities, and Project of Liaoning Innovative Research Team in University WT2010004. The author’s work is supported by National Science Council of Taiwan, Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan 11677.