Abstract

This paper is a counterpart of Bi et al., 2011. For a locally optimal solution to the nonlinear second-order cone programming (SOCP) problem, under Robinson's constraint qualification, we establish the equivalence of the following three conditions: the nonsingularity of Clarke's Jacobian of the Fischer-Burmeister (FB) nonsmooth system for the Karush-Kuhn-Tucker conditions; the strong second-order sufficient condition together with constraint nondegeneracy; and the strong regularity of the Karush-Kuhn-Tucker point.

1. Introduction

The nonlinear second-order cone programming (SOCP) problem can be stated as where , , and are given twice continuously differentiable functions, and is the Cartesian product of some second-order cones, that is, with and being the second-order cone (SOC) in defined by By introducing a slack variable to the second constraint, the SOCP (1) is equivalent to In this paper, we will concentrate on this equivalent formulation of problem (1).
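For concreteness, the following is a standard statement of such a problem and of its slack-variable reformulation; the symbols $f$, $h$, $g$, $x$, $s$, and $n_i$ are generic placeholders and may differ from the notation used in the displays (1)-(4):
\[
\min_{x\in\mathbb{R}^{n}}\ f(x)\quad\text{s.t.}\quad h(x)=0,\ \ g(x)\in\mathcal{K},
\qquad
\mathcal{K}=\mathcal{K}^{n_1}\times\cdots\times\mathcal{K}^{n_r},
\]
\[
\mathcal{K}^{n_i}=\bigl\{(x_1,x_2)\in\mathbb{R}\times\mathbb{R}^{n_i-1}:\ \|x_2\|\le x_1\bigr\},
\]
and, after introducing a slack variable $s$,
\[
\min_{x,s}\ f(x)\quad\text{s.t.}\quad h(x)=0,\ \ g(x)-s=0,\ \ s\in\mathcal{K}.
\]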

Let be the Lagrangian function of problem (4) and denote by the normal cone of at in the sense of convex analysis [1]: Then the Karush-Kuhn-Tucker (KKT) conditions for (4) take the following form: where is the derivative of at with respect to . Recall that is an SOC complementarity function associated with the cone if With an SOC complementarity function associated with , we may reformulate the KKT optimality conditions in (7) as the following nonsmooth system:
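For reference, a mapping $\phi$ is an SOC complementarity function associated with $\mathcal{K}^{n}$ exactly when
\[
\phi(x,y)=0\ \Longleftrightarrow\ x\in\mathcal{K}^{n},\quad y\in\mathcal{K}^{n},\quad \langle x,y\rangle=0 ,
\]
so applying such a $\phi$ blockwise to the primal-dual pairs in (7) converts the KKT conditions into a square system of nonsmooth equations (the notation $\phi$, $x$, $y$ here is generic).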

The most popular SOC complementarity functions include the vector-valued natural residual (NR) function and Fischer-Burmeister (FB) function, respectively, defined as where is the projection operator onto the closed convex cone , means the Jordan product of and itself, and denotes the unique square root of . It turns out that the FB SOC complementarity function enjoys almost all favorable properties of the NR SOC complementarity function (see [2]). Also, the squared norm of induces a continuously differentiable merit function with globally Lipschitz continuous derivative [3, 4]. This greatly facilitates the globalization of the semismooth Newton method [5, 6] for solving the FB nonsmooth system of KKT conditions:
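Written in generic notation (which may differ from that of the paper's displays), these two functions are usually given by
\[
\phi_{\mathrm{NR}}(x,y)=x-\Pi_{\mathcal{K}^{n}}(x-y),
\qquad
\phi_{\mathrm{FB}}(x,y)=(x^{2}+y^{2})^{1/2}-x-y,
\]
where $\Pi_{\mathcal{K}^{n}}$ is the projection onto $\mathcal{K}^{n}$, $x^{2}=x\circ x$ denotes the Jordan product of $x$ with itself, and $(\cdot)^{1/2}$ is the unique square root in $\mathcal{K}^{n}$.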

Recently, with the help of [7, Theorem 30] and [8, Lemma 11], Wang and Zhang [9] gave a characterization for the strong regularity of the KKT point of the SOCP (1) by studying the nonsingularity of Clarke's Jacobian of the NR nonsmooth system. They showed that the strong regularity of the KKT point, the nonsingularity of Clarke's Jacobian of at the KKT point, and the strong second-order sufficient condition and constraint nondegeneracy [7] are all equivalent. These nonsingularity conditions are better structured than those of [10] for the nonsingularity of the B-subdifferential of the NR system. It is then natural to ask the following question: is it possible to obtain a characterization for the strong regularity of the KKT point by studying the nonsingularity of Clarke's Jacobian of the FB nonsmooth system? Note that, up to now, it is not even known whether the B-subdifferential of the FB system is nonsingular without the strict complementarity assumption.

In this work, for a locally optimal solution to the nonlinear SOCP (4), under Robinson's constraint qualification, we show that the strong second-order sufficient condition and constraint nondegeneracy introduced in [7], the nonsingularity of Clarke's Jacobian of at the KKT point, and the strong regularity of the KKT point are equivalent to each other. On the one hand, this gives a new characterization for the strong regularity of the KKT point; on the other hand, it provides a mild condition guaranteeing the quadratic convergence rate of the semismooth Newton method [5, 6] for the FB system. Parallel results were recently obtained for the FB system of nonlinear semidefinite programming (see [11]), but they are not duplicated here. As will be seen in Sections 3 and 4, the analysis techniques used here are completely different from those in [11], and it seems hard to unify the two in a single framework under the Euclidean Jordan algebra. The main reason is that the Clarke Jacobians associated with the FB SOC complementarity function and the FB semidefinite cone complementarity function require entirely different analyses.

Throughout this paper, denotes an identity matrix of appropriate dimension, denotes the space of -dimensional real column vectors, and is identified with . Thus, is viewed as a column vector in . The notations , , and denote the interior, the boundary, and the boundary excluding the origin of , respectively. For any , we write (resp., ) if (resp., ). For any given real symmetric matrix , we write (resp., ) if is positive semidefinite (resp., positive definite). In addition, and denote the derivative and the second-order derivative, respectively, of a twice differentiable function with respect to the variable .

2. Preliminary Results

First we recall from [12] the definition of Jordan product and spectral factorization.

Definition 1. The Jordan product of is given by

Unlike scalar or matrix multiplication, the Jordan product is not associative in general. The identity element under this product is , that is, for all . For each , we define the associated arrow matrix by Then it is easy to verify that for any . Recall that each admits a spectral factorization, associated with , of the form where and are the spectral values and the associated spectral vectors of , respectively, with respect to the Jordan product, defined by with if and otherwise being any vector in satisfying .
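For the reader's convenience, the standard formulas behind Definition 1 and the preceding paragraph are recalled below, under the assumption that the paper writes $x=(x_1,x_2)\in\mathbb{R}\times\mathbb{R}^{n-1}$ (this convention is fixed at the end of Section 2):
\[
x\circ y=\bigl(\langle x,y\rangle,\ x_1y_2+y_1x_2\bigr),
\qquad
L_x=\begin{pmatrix} x_1 & x_2^{T}\\ x_2 & x_1 I\end{pmatrix},
\qquad
x=\lambda_1 u^{(1)}+\lambda_2 u^{(2)},
\]
with spectral values and spectral vectors
\[
\lambda_i=x_1+(-1)^{i}\|x_2\|,
\qquad
u^{(i)}=\tfrac12\bigl(1,\ (-1)^{i}\bar{x}_2\bigr),\quad i=1,2,
\]
where $\bar{x}_2=x_2/\|x_2\|$ if $x_2\neq 0$ and $\bar{x}_2$ is any unit vector in $\mathbb{R}^{n-1}$ otherwise. The identity element is $e=(1,0,\ldots,0)$, and $L_xy=x\circ y$ for all $y\in\mathbb{R}^{n}$.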

Definition 2. The determinant of a vector is defined as , and a vector is said to be invertible if its determinant is nonzero.
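A standard formula consistent with Definition 2 is
\[
\det(x)=\lambda_1\lambda_2=x_1^{2}-\|x_2\|^{2},
\]
so that $x$ is invertible precisely when $|x_1|\neq\|x_2\|$, that is, when $x$ lies neither on the boundary of $\mathcal{K}^{n}$ nor on the boundary of $-\mathcal{K}^{n}$.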

By the formula of spectral factorization, it is easy to compute that the projection of onto the closed convex cone , denoted by , has the expression Define . Then, using the expression of , it follows that
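Assuming the quantity defined in this paragraph is $|z|=(z\circ z)^{1/2}$, the standard expressions read
\[
\Pi_{\mathcal{K}^{n}}(z)=[\lambda_1]_{+}u^{(1)}+[\lambda_2]_{+}u^{(2)},
\qquad
|z|=\Pi_{\mathcal{K}^{n}}(z)+\Pi_{\mathcal{K}^{n}}(-z)=|\lambda_1|\,u^{(1)}+|\lambda_2|\,u^{(2)},
\]
where $[\alpha]_{+}=\max\{\alpha,0\}$ and $\lambda_i$, $u^{(i)}$ are the spectral values and vectors of $z$.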

The spectral factorization of the vectors , , and the matrix have various interesting properties (see [13]). We list several properties that we will use later.

Property 3. For any with spectral factorization (15), we have the following.
(a) .
(b) If , then and .
(c) If , then and is invertible with
(d) (resp., ) if and only if (resp., ).
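Although the exact statements of Property 3 depend on the displayed formulas, the following standard facts are consistent with parts (c) and (d): $x\in\mathcal{K}^{n}$ (resp., $x\in\operatorname{int}\mathcal{K}^{n}$) if and only if $\lambda_1\ge 0$ (resp., $\lambda_1>0$), which holds if and only if $L_x\succeq 0$ (resp., $L_x\succ 0$); and for $x\in\operatorname{int}\mathcal{K}^{n}$ the arrow matrix is invertible with
\[
L_x^{-1}=\frac{1}{\det(x)}
\begin{pmatrix}
x_1 & -x_2^{T}\\
-x_2 & \dfrac{\det(x)}{x_1}\,I+\dfrac{x_2x_2^{T}}{x_1}
\end{pmatrix}.
\]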

The following lemma states a result for the arrow matrices associated with and , which will be used in the next section to characterize an important property of the elements of Clarke's Jacobian of at a general point.

Lemma 4. For any given and , if , then where means the spectral norm of a real matrix . Consequently, it holds that

Proof. Let . From [13, Proposition 3.4], it follows that This shows that , and the first part follows. Note that, for any , By letting , we immediately obtain the second part.

The following two lemmas state the properties of with which are often used in the subsequent sections. The proof of Lemma 5 is given in [3, Lemma 2].

Lemma 5. For any with , one has

Lemma 6. For any , let .
(a) If , then for any , it holds that
(b) If , then the following four equalities hold and consequently the expression of can be simplified as

Proof. (a) The result is direct by the equalities of Lemma 5 since .
(b) Since , we must have . Using Lemma 5, and , we easily obtain the first part. Note that . Using Property 3(b) and Lemma 5 yields (27).

When satisfies the complementarity condition, we have the following result.

Lemma 7. For any given , if and , then there exists a constant such that and .

Proof. Since , we have that and , and consequently, This means that there exists such that , and then .

Next we recall from [14] the strong regularity of a solution to the generalized equation where is a continuously differentiable mapping from a finite-dimensional real vector space to itself, is a closed convex set in , and is the normal cone of at . As will be shown in Section 4, the KKT condition (7) can be written in the form of (29).

Definition 8. We say that is a strongly regular solution of the generalized equation (29) if there exist neighborhoods of the origin and of such that, for every , the linearized generalized equation has a unique solution in , denoted by , and the mapping is Lipschitz continuous.
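In Robinson's framework, the generalized equation and its canonically perturbed linearization at a solution take the following standard form (generic symbols $\phi$, $D$, $\bar z$, $\delta$, assumed to match (29) and Definition 8):
\[
0\in\phi(z)+\mathcal{N}_{D}(z),
\qquad
\delta\in\phi(\bar z)+\mathcal{J}\phi(\bar z)(z-\bar z)+\mathcal{N}_{D}(z),
\]
and strong regularity of $\bar z$ asks that, for every sufficiently small perturbation $\delta$, the linearized inclusion have a unique solution $z(\delta)$ near $\bar z$ depending Lipschitz continuously on $\delta$.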

To close this section, we recall from [15] Clarke's (generalized) Jacobian of a locally Lipschitz mapping. Let be an open set and a locally Lipschitz continuous function on . By Rademacher's theorem, is almost everywhere (Fréchet) differentiable in . We denote by the set of points in where is F-differentiable. Then Clarke's Jacobian of at is defined by , where “conv” denotes the convex hull, and the B-subdifferential , a name coined in [16], has the form For the concept of (strong) semismoothness, please refer to the literature [5, 6].
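In standard notation, with $D_F$ the set of points at which the locally Lipschitz map $F$ is F-differentiable, these objects read
\[
\partial_{B}F(x)=\Bigl\{\lim_{k\to\infty}\mathcal{J}F(x^{k})\ :\ x^{k}\to x,\ x^{k}\in D_F\Bigr\},
\qquad
\partial F(x)=\operatorname{conv}\,\partial_{B}F(x),
\]
which is the standard form of the definitions recalled above.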

Unless otherwise stated, in the rest of this paper, for any , we write , where is the first component of and is a column vector consisting of the remaining entries of . For any , let

3. Directional Derivative and B-Subdifferential

The function is directionally differentiable everywhere by [2, Corollary 3.3]. However, to the best of our knowledge, an expression for its directional derivative is not available in the literature. In this section, we derive this expression and then prove that the B-subdifferential of at a general point coincides with that of its directional derivative function at the origin. Throughout this section, we assume that .

Proposition 9. For any given , the directional derivative of at in the direction has the following form.
(a) If , then .
(b) If , then .
(c) If , then where , and is defined by

Proof. Part (a) is immediate by noting that is a positively homogeneous function. Part (b) is due to [13, Proposition 5.2]. We next prove part (c) by considering the following two subcases. In the rest of the proof, we let with denote the spectral values of . Since , we have , and from Lemma 6(b) it follows that
(c.1): for sufficiently small . In this case, from Lemma 6(b), we know that has the following expression: Let be the first element of and the vector consisting of the remaining components of . By the above expression of , where the last equality follows from Lemma 5. The above two limits imply
(c.2): for sufficiently small . Let with the spectral values . An elementary calculation gives Also, since , applying the Taylor formula of at and Lemma 6(a) yields Now using the definition of and noting that and , we have that which in turn implies that We first calculate . Using (38) and (40), it is easy to see that and consequently, We next calculate . Since , using (38)-(39) and Lemma 6(a), Using and (40), we simplify the sum of the first two terms in (45) as Then, from (45) and , we obtain that We next simplify the numerator of the right hand side of (47). Note that Therefore, adding the last two equalities and using Lemma 5 yields that Combining this equality with (47) and using the definition of in (33), we readily get We next calculate . To this end, we also need to examine . From (38)-(39) and (40), it follows that Together with (44) and (50), we have that where the last equality uses and . Combining with (42), (44), and (50), a suitable rearrangement shows that has the expression (32).
Finally, we show that, when for sufficiently small , the formula in (32) reduces to the one in (37). Indeed, an elementary calculation yields This implies that if for sufficiently small , that is, for sufficiently small , then , and hence . Thus, in (32) can be simplified as where the equality uses . The proof is complete.

As a consequence of Proposition 9, we have the following necessary and sufficient characterizations of the points at which and are (continuously) differentiable.

Corollary 10. (a) The function is (continuously) differentiable at if and only if , where is defined in (31). Also, when , one has
(b) The function is (continuously) differentiable at if and only if is invertible. Also, when is invertible, .

Proof. (a) The “if” part is direct by [13, Proposition 5.2]. We next prove the “only if” part. If is differentiable at , then is a linear function of , which by Proposition 9 holds only if . The formula of is given in [13].
(b) Since , by part (a) is (continuously) differentiable at if and only if , which is equivalent to requiring that is invertible since always holds. When is invertible, the formula of follows from part (a).
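As a point of reference, the classical result from [13] underlying part (a) states that $\phi_{\mathrm{FB}}$ is continuously differentiable at $(x,y)$ if and only if $x^{2}+y^{2}\in\operatorname{int}\mathcal{K}^{n}$, in which case, with $w=(x^{2}+y^{2})^{1/2}$,
\[
\mathcal{J}_{x}\phi_{\mathrm{FB}}(x,y)=L_{w}^{-1}L_{x}-I,
\qquad
\mathcal{J}_{y}\phi_{\mathrm{FB}}(x,y)=L_{w}^{-1}L_{y}-I;
\]
this is stated here only as a standard reference point and in generic notation, which may differ from that of Corollary 10.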

For any given with , define for any , and let . Then, comparing with (33), we can rewrite the function as Note that the Euclidean norm is globally Lipschitz continuous and strongly semismooth everywhere in , and is a linear function. Then, (57) implies that is globally Lipschitz continuous and strongly semismooth everywhere in by [17, Theorem 19]. Also, the function is differentiable at if and only if . The following lemma characterizes the -subdifferential of the function at the origin.

Lemma 11. For any given with , let be defined by (33). Then, the B-subdifferential of the function at takes the following form:

Proof. Let . By the definition of the elements in , there exists a sequence in converging to with such that By (57), a simple computation shows that such belongs to the set on the right hand side of (58). Thus, is included in the set on the right hand side of (58). In fact, and in (58) are the limit points of and , respectively.
Conversely, let be an arbitrary element of the set on the right hand side of (58). Then, there exists a with such that Take the sequence with and . Clearly, as . Also, by Lemma 5, it is easy to verify that This shows that and . Hence, . Thus, the set on the right hand side of (58) is included in .

Now we may prove the equivalence between the B-subdifferential of at a general point and that of its directional derivative function at . This result corresponds to that of [18, Lemma 14] established for the NR SOC function .

Lemma 12. For any given , let . Then,

Proof. The result is direct by Proposition 9(a)-(b) and Lemma A.1 in the Appendix.

Using Lemma 12, we may present an upper estimate for Clarke's Jacobian of at the point with , which will be used in the next section.

Proposition 13. For any given with , one has where and are real symmetric matrices defined as follows:

Proof. We first simplify the last two terms in (32) by using . Note that where the last equality uses . Therefore, from (32), we have Now, applying Lemma 12, we immediately obtain that where, by Lemma 11 and the definition of Clarke's Jacobian,
Let with . Then, it suffices to prove that such and satisfy all inequalities and equalities in (63). By (68), there exists a vector with such that Using Lemma 5, we immediately obtain that This means that . Similarly, we also have . We next prove that . By Lemma 5, it is easy to verify that