#### Abstract

First, we give the Karush-Kuhn-Tucker (KKT) optimality conditions of the primal problem and briefly introduce Euclidean Jordan algebra. On the basis of the Jordan algebra, we extend the smoothing Fischer-Burmeister (F-B) function to the Jordan algebra and smooth the complementarity condition, so that the first-order optimality conditions can be reformulated as a nonlinear system. Second, we use a quasi-Newton method with a mixed line search to solve this nonlinear system. Finally, we prove the global and locally superlinear convergence of the algorithm.

#### 1. Introduction

Linear second-order cone programming (SOCP) problems are convex optimization problems that minimize a linear function over the intersection of an affine manifold with the Cartesian product of second-order cones. Linear programming (LP), SOCP, and semidefinite programming (SDP) all belong to symmetric cone programming: LP is a special case of SOCP, and SOCP is a special case of SDP. SOCP can therefore be solved by algorithms designed for SDP, but it also admits efficient solution methods of its own. Nesterov and Todd [1, 2] carried out early research on primal-dual interior point methods. Recently, solution methods for SOCP have developed rapidly, and many scholars have focused on the problem.

The primal and dual standard forms of the linear SOCP are given by

$$\text{(P)}\quad \min\ c^{T}x\quad \text{s.t.}\quad Ax=b,\ x\in\mathcal{K};\qquad \text{(D)}\quad \max\ b^{T}y\quad \text{s.t.}\quad A^{T}y+s=c,\ s\in\mathcal{K},$$

where $\mathcal{K}=\mathcal{K}^{n_{1}}\times\cdots\times\mathcal{K}^{n_{r}}$ is a Cartesian product of second-order cones

$$\mathcal{K}^{n_{i}}=\left\{x=(x_{1};\bar{x})\in\mathbb{R}\times\mathbb{R}^{n_{i}-1}:\ x_{1}\geq\|\bar{x}\|\right\},$$

where $\|\cdot\|$ refers to the standard Euclidean norm.
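Membership in a single second-order cone follows directly from this definition; the following is a minimal sketch (the helper name `in_soc` is ours, not from the paper):

```python
import numpy as np

def in_soc(x, tol=0.0):
    """Check x = (x1; xbar) ∈ K^n, i.e., x1 >= ||xbar||."""
    x = np.asarray(x, dtype=float)
    return bool(x[0] >= np.linalg.norm(x[1:]) - tol)

# (5; 3, 4) lies on the boundary of K^3 since 5 = ||(3, 4)||
print(in_soc([5.0, 3.0, 4.0]))   # True
print(in_soc([1.0, 3.0, 4.0]))   # False
```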

In this paper, the vectors $x$, $c$, and $s$ and the matrix $A$ are partitioned conformally with $\mathcal{K}$, namely,

$$x=(x^{1};\ldots;x^{r}),\quad c=(c^{1};\ldots;c^{r}),\quad s=(s^{1};\ldots;s^{r}),\quad A=(A^{1},\ldots,A^{r}),$$

with $x^{i},c^{i},s^{i}\in\mathbb{R}^{n_{i}}$ and $A^{i}\in\mathbb{R}^{m\times n_{i}}$.

Besides interior point methods, semismooth and smoothing Newton methods can also be used to solve SOCP. In [3], the Karush-Kuhn-Tucker (KKT) optimality conditions of the primal-dual problem were reformulated as a semismooth nonlinear system, which was solved by a Newton method combined with the central path. In [4], the KKT optimality conditions were reformulated as a smooth system of nonlinear equations, which was then solved by combining a Newton method with the central path. References [3, 4] established global and locally quadratic convergence of their algorithms.

#### 2. Preliminaries and Algorithm

In this section, we introduce the Jordan algebra and derive the nonlinear system that comes from the Karush-Kuhn-Tucker (KKT) optimality conditions. Finally, we introduce two kinds of derivative-free line search rules.

Associated with each vector $x=(x_{1};\bar{x})\in\mathbb{R}\times\mathbb{R}^{n-1}$, there is an arrow-shaped matrix $L_{x}$ which is defined as follows:

$$L_{x}=\begin{pmatrix}x_{1} & \bar{x}^{T}\\ \bar{x} & x_{1}I\end{pmatrix}.$$

Euclidean Jordan algebra is associated with second-order cones. For now we assume that all vectors consist of a single block, $x\in\mathbb{R}^{n}$. For two vectors $x=(x_{1};\bar{x})$ and $s=(s_{1};\bar{s})$, define the following multiplication:

$$x\circ s=\left(x^{T}s;\;x_{1}\bar{s}+s_{1}\bar{x}\right)=L_{x}s.$$

So, "+" and "$\circ$", together with $\mathbb{R}^{n}$, give rise to a Jordan algebra associated with the second-order cone $\mathcal{K}^{n}$.
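As an illustration of the algebra above, the arrow-shaped matrix and the Jordan product can be sketched as follows (helper names are ours); the two are linked by $L_{x}s=x\circ s$:

```python
import numpy as np

def arrow(x):
    """Arrow-shaped matrix L_x associated with x = (x1; xbar)."""
    x = np.asarray(x, dtype=float)
    L = x[0] * np.eye(x.size)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

def jordan_prod(x, s):
    """Jordan product x ∘ s = (x^T s; x1*sbar + s1*xbar)."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    return np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:]))

x, s = np.array([2.0, 1.0, 0.0]), np.array([3.0, 0.0, 1.0])
print(jordan_prod(x, s))   # [6. 3. 2.]
print(arrow(x) @ s)        # the same vector: L_x s = x ∘ s
```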

It is well known that the vector $x=(x_{1};\bar{x})\in\mathbb{R}\times\mathbb{R}^{n-1}$ has a spectral decomposition $x=\lambda_{1}u^{(1)}+\lambda_{2}u^{(2)}$, where $\lambda_{1},\lambda_{2}$ and $u^{(1)},u^{(2)}$ are the spectral values and spectral vectors of $x$, given by

$$\lambda_{i}=x_{1}+(-1)^{i}\|\bar{x}\|,\qquad u^{(i)}=\begin{cases}\dfrac{1}{2}\left(1;\,(-1)^{i}\dfrac{\bar{x}}{\|\bar{x}\|}\right), & \bar{x}\neq 0,\\[1ex] \dfrac{1}{2}\left(1;\,(-1)^{i}w\right), & \bar{x}=0,\end{cases}\qquad i=1,2,$$

where $w$ is any vector in $\mathbb{R}^{n-1}$ satisfying $\|w\|=1$.
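The spectral decomposition is straightforward to compute numerically; a minimal sketch under the standard formulas (helper name ours):

```python
import numpy as np

def spectral(x):
    """Spectral values/vectors of x = (x1; xbar) w.r.t. the second-order cone."""
    x = np.asarray(x, dtype=float)
    nrm = np.linalg.norm(x[1:])
    # any unit vector w works when xbar = 0
    w = x[1:] / nrm if nrm > 0 else np.eye(x.size - 1)[0]
    lam1, lam2 = x[0] - nrm, x[0] + nrm
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return lam1, lam2, u1, u2

lam1, lam2, u1, u2 = spectral(np.array([5.0, 3.0, 4.0]))
print(lam1, lam2)             # 0.0 10.0
print(lam1 * u1 + lam2 * u2)  # recovers x = (5; 3, 4)
```

Note that $x\in\mathcal{K}^{n}$ exactly when both spectral values are nonnegative, which the example (a boundary point, $\lambda_1=0$) illustrates.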

The KKT optimality conditions of problem (1) are written as follows:

$$Ax=b,\qquad A^{T}y+s=c,\qquad x\circ s=0,\qquad x\in\mathcal{K},\ s\in\mathcal{K}.$$

Interior point methods typically deal with the following perturbation of the optimality conditions (8):

$$Ax=b,\qquad A^{T}y+s=c,\qquad x\circ s=\mu e,\qquad x\in\operatorname{int}\mathcal{K},\ s\in\operatorname{int}\mathcal{K},$$

where $\mu>0$ is the central path parameter and $e$ is the identity element of the Jordan algebra.

In this paper, we construct a nonlinear system that is equivalent to (9). Then the nonlinear system is solved by a quasi-Newton method to obtain the optimal solution of (1). Here, we introduce a smoothing function $\phi_{\mu}$,

$$\phi_{\mu}(x,s)=x+s-\left(x\circ x+s\circ s+2\mu e\right)^{1/2},\qquad \mu\geq 0,$$

where the square root is taken in the Jordan algebra; this is a smoothed Fischer-Burmeister (F-B) function.

Reference [5] gives some properties of the smoothing function (10).

Proposition 1. *$\phi_{0}(x,s)=0$ if and only if $x\in\mathcal{K}$, $s\in\mathcal{K}$, and $x\circ s=0$. *

Proposition 2. *For any $\mu>0$, $\phi_{\mu}(x,s)=0$ if and only if $x\in\operatorname{int}\mathcal{K}$, $s\in\operatorname{int}\mathcal{K}$, and $x\circ s=\mu e$. *
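These properties can be checked numerically on small examples. The sketch below assumes the smoothed F-B form $\phi_\mu(x,s)=x+s-(x\circ x+s\circ s+2\mu e)^{1/2}$, with the square root taken in the Jordan algebra; the helper names are ours:

```python
import numpy as np

def jordan_prod(x, s):
    """Jordan product x ∘ s = (x^T s; x1*sbar + s1*xbar)."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    return np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:]))

def soc_sqrt(v):
    """Square root in the Jordan algebra (assumes v lies in the cone)."""
    v = np.asarray(v, float)
    nrm = np.linalg.norm(v[1:])
    w = v[1:] / nrm if nrm > 0 else np.eye(v.size - 1)[0]
    lam1, lam2 = v[0] - nrm, v[0] + nrm
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def phi(mu, x, s):
    """Assumed smoothed F-B function: x + s - sqrt(x∘x + s∘s + 2*mu*e)."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    e = np.zeros(x.size); e[0] = 1.0
    return x + s - soc_sqrt(jordan_prod(x, x) + jordan_prod(s, s) + 2.0 * mu * e)

# complementary pair on the boundary of K^3: x ∘ s = 0, so phi_0(x, s) = 0
print(phi(0.0, [1.0, 1.0, 0.0], [1.0, -1.0, 0.0]))   # ~[0. 0. 0.]
```

For $\mu>0$, one can verify in the same way that $\phi_\mu(x,s)=0$ forces $x\circ s=\mu e$, e.g., with $x=s=(\sqrt{\mu};0;0)$.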

Let

$$\Phi_{\mu}(x,y,s)=\begin{pmatrix}Ax-b\\ A^{T}y+s-c\\ \phi_{\mu}(x,s)\end{pmatrix}.$$

Apparently, $\Phi_{0}(x,y,s)=0$ is equivalent to (8). Let $z=(x,y,s)$; then we write $\Phi_{\mu}(z)=\Phi_{\mu}(x,y,s)$. So, the KKT optimality conditions are equivalent to the following:

$$\Phi_{0}(z)=0.$$

Next, we solve (12) by using the Broyden rank-one quasi-Newton method. When problem (12) is solved with a quasi-Newton method, the gradient or Jacobian does not appear, which reduces the amount of calculation. However, it is then not suitable to use the usual line searches such as the Wolfe or Powell rules. Thus, we suggest two kinds of derivative-free line search rules.

In 1986, Griewank [6] put forward a kind of monotone line search. Set

$$\theta(z)=\|\Phi(z)\|.$$

Let $\alpha_{k}$ satisfy the following inequality:

$$\|\Phi(z_{k}+\alpha_{k}d_{k})\|\leq(1-\sigma\alpha_{k})\|\Phi(z_{k})\|,$$

where $\sigma\in(0,1)$ is a constant.
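A backtracking procedure implementing a monotone condition of this type (we assume the form $\|\Phi(z_k+\alpha_k d_k)\|\le(1-\sigma\alpha_k)\|\Phi(z_k)\|$; the function name and default parameters are ours) can be sketched as:

```python
import numpy as np

def monotone_step(F, x, d, sigma=1e-4, beta=0.5, alpha_min=1e-12):
    """Backtrack for the largest α in {1, β, β², ...} with
    ||F(x + αd)|| <= (1 - σα)||F(x)||; return None if none works."""
    x, d = np.asarray(x, float), np.asarray(d, float)
    fx = np.linalg.norm(F(x))
    alpha = 1.0
    while alpha >= alpha_min:
        if np.linalg.norm(F(x + alpha * d)) <= (1.0 - sigma * alpha) * fx:
            return alpha
        alpha *= beta
    return None

# the Newton direction for F(x) = x from x = 2 accepts the unit step
print(monotone_step(lambda v: v, [2.0], [-2.0]))   # 1.0
```

Returning `None` models exactly the failure discussed next: for a poor direction no positive step satisfies the condition.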

Due to (13) and (14), we obtain

$$\|\Phi(z_{k+1})\|\leq(1-\sigma\alpha_{k})\|\Phi(z_{k})\|<\|\Phi(z_{k})\|.$$

Clearly, the line search yields a descent method.

From the definition of the search direction, the following conclusion holds: when $d_{k}$ is a sufficiently good descent direction for $\|\Phi(\cdot)\|$ at $z_{k}$, (14) holds for all $\alpha_{k}>0$ sufficiently small; when it is not, (14) need not hold for any $\alpha_{k}>0$.

Because of this possible failure of the line search rule, many scholars have put forward different kinds of derivative-free line search rules. In [7], Li and Fukushima suggested a kind of nonmonotone derivative-free line search rule. The step $\alpha_{k}$ satisfies the following inequality:

$$\|\Phi(z_{k}+\alpha_{k}d_{k})\|\leq(1+\eta_{k})\|\Phi(z_{k})\|-\sigma_{1}\|\alpha_{k}d_{k}\|^{2},$$

where $\sigma_{1}>0$ is a constant, and there exists a constant $\eta>0$ such that the positive sequence $\{\eta_{k}\}$ satisfies

$$\sum_{k=0}^{\infty}\eta_{k}\leq\eta<\infty.$$
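A sketch of a Li-Fukushima-type nonmonotone search, assuming the condition $\|\Phi(z_k+\alpha d_k)\|\le(1+\eta_k)\|\Phi(z_k)\|-\sigma_1\|\alpha d_k\|^2$ (the exact constants in [7] may differ; names and defaults are ours):

```python
import numpy as np

def nonmonotone_step(F, x, d, eta_k, sigma1=1e-4, beta=0.5, alpha_min=1e-12):
    """Largest α in {1, β, β², ...} with
    ||F(x + αd)|| <= (1 + η_k)||F(x)|| - σ1||αd||²  (Li-Fukushima type)."""
    x, d = np.asarray(x, float), np.asarray(d, float)
    fx = np.linalg.norm(F(x))
    alpha = 1.0
    while alpha >= alpha_min:
        lhs = np.linalg.norm(F(x + alpha * d))
        if lhs <= (1.0 + eta_k) * fx - sigma1 * alpha**2 * np.dot(d, d):
            return alpha
        alpha *= beta
    return None

# with η_k > 0 the unit step is accepted here, even though ||F|| need not decrease
print(nonmonotone_step(lambda v: v, [2.0], [-2.0], eta_k=0.1))   # 1.0
```

The slack term $\eta_k\|\Phi(z_k)\|$ is what makes the rule nonmonotone: a small increase of the residual is tolerated, and summability of $\{\eta_k\}$ keeps the total increase bounded.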

Obviously, (16) holds for all $\alpha_{k}>0$ sufficiently small. For any $k$, we have

$$\|\Phi(z_{k+1})\|\leq(1+\eta_{k})\|\Phi(z_{k})\|.$$

According to the discussion above, there is a defect in the line search rule (15), but it is monotonically decreasing, namely, . However, (16) is a nonmonotone line search rule. In this paper, a reasonable combination of the two line search rules is given.

In order to implement the mixed line search rule, we define the function

When the quantities involved are too small, (14) does not hold, and we then use the nonmonotone line search.

On the basis of the preceding discussion, we now suggest a Broyden rank-one quasi-Newton method for solving SOCP. We present the following algorithm for solving (12).

*Algorithm A*

*Step 0. Initialization and Data.* Choose parameters , , , , , , . Fix a positive sequence that satisfies (17). Fix a starting point and an initial matrix . Set .

*Step 1. Termination Condition.* If , stop; otherwise, let be a solution of the linear equation

*Step 2. The Line Search for the Unit Step.* If

holds, then set . Go to Step 4.

*Step 3. Mixed Line Search Rule.* (3.1) Set . (3.2) If satisfies (14), set ; otherwise, if holds, let be the maximum of the numbers satisfying (16); else, set and go back to Step (3.2).

*Step 4. Update.* Set .

*Step 5. Computation of .* By the Broyden rank-one correction formula, we obtain

where , , and is nonsingular.

*Step 6.* Set . Go back to Step 1.
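To make Step 1 and Step 5 concrete, here is a stripped-down sketch of a Broyden rank-one iteration with unit steps only (no mixed line search, no safeguards; helper names ours), not the full Algorithm A:

```python
import numpy as np

def broyden_update(B, s, y):
    """Broyden rank-one update: B+ = B + (y - B s) s^T / (s^T s).
    Enforces the secant condition B+ s = y."""
    return B + np.outer(y - B @ s, s) / np.dot(s, s)

def broyden_solve(F, x0, tol=1e-10, max_iter=100):
    """Plain Broyden iteration with unit steps, for illustration only."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    for _ in range(max_iter):
        Fx = np.asarray(F(x), dtype=float)
        if np.linalg.norm(Fx) <= tol:
            break
        d = np.linalg.solve(B, -Fx)   # B_k d = -F(x_k), cf. Step 1
        x_new = x + d                 # unit step
        B = broyden_update(B, x_new - x, np.asarray(F(x_new), float) - Fx)
        x = x_new
    return x

# find the positive root of x² = 2
print(broyden_solve(lambda v: np.array([v[0]**2 - 2.0]), [1.0]))  # ≈ [1.41421356]
```

The update changes $B_k$ only on the one-dimensional span of $s_k$ while satisfying the secant condition $B_{k+1}s_k=y_k$, which is what makes the method Jacobian-free.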

#### 3. Global Convergence

In this section, we prove the global convergence of the algorithm. To this end, we define some quantities used in the algorithm and state some hypotheses.

We define

then . Set

then

Proposition 3. *Suppose that the sequence generated by the algorithm satisfies
**
Then the level set
**
is bounded, where is the constant defined in (17). *

Proposition 4. * is Lipschitz continuous on ; that is, there exists a constant such that
*

*Proof. *By [5], is a continuously differentiable function, so there exists a constant such that

For any , we have
(H1) The matrix has full row rank.

From Proposition 3, Proposition 4, and (H1), is invertible for all . There exist constants , , , and such that

Lemma 5. *If the sequence is generated by the algorithm, then . *

*Proof. *If is defined by or (14), then we have

The above inequality and (18) imply
where the sequence is generated by the algorithm. According to (36), we have

Hence, . The claim holds.

Lemma 6. *If (H1) holds and the sequence is generated by the algorithm, then
*

*Proof. * If is determined by (16), it is clearly known that

If is determined by , from it can be known that

If is ascertained by (14), then from (32), we have

Thus we obtain

Set . Then, from (39) and (42), we see that

Let and . It follows directly from (40) and (43) that

By summing both sides of the above inequality, we have

This completes the proof.

According to some results in [8, 9], we can obtain the following results.

Lemma 7. * Let the positive sequences and satisfy and ; then the sequence converges. *

Lemma 8. *Let the sequence be generated by the algorithm, then the sequence is convergent. *

Lemma 9. * Let (H1) hold. Let the sequence be defined by (24) and let be generated by the algorithm. If
**
then
** Specifically, there is a subsequence of which converges to zero. Moreover, if
** then
** In particular, the whole sequence . *

Lemma 10. * Let (H1) hold and the sequence be generated by the algorithm. Then there exist subsequences of that converge to , respectively. Furthermore, one has
*

* Proof. *Let and be defined by (23) and (24), respectively. By Lemmas 6 and 9, there is a subsequence of tending to zero, from which Lemma 5 implies that is bounded. Without loss of generality, we assume that there is a subsequence that converges to . Since Lemma 6 implies , then , . Therefore, there exists a constant such that for all sufficiently large. Thus from (20) and (25), we obtain

So there is a constant such that, for all sufficiently large,

Without loss of generality, we assume that converges to (passing to a subsequence if necessary). By (25), we obtain . It follows that as with . Thus, taking the limit in (20) as with yields (50). The proof is finished.

Theorem 11. * Assume that (H1) holds, then the whole sequence generated by the algorithm converges to the unique solution of (12). *

*Proof. *It is clear from Lemma 8 that the sequence is convergent. Thus, it is sufficient to verify that there exists an accumulation point of that is the solution of (12).

(i) We suppose that there are infinitely many such that are determined by (21). Let be an index set with (21)}. If , then holds.

If , then we have , in that way, . This implies that . Hence, .

(ii) Suppose there are infinitely many such that are determined by (14). Let be an index set with (14)}. Since is bounded, there is a subsequence of that tends to , that is, . If , then implies that . By (50), we obtain .

If , then for with . By of the algorithm, does not satisfy (14) for all large enough, namely,

Thus,

that is,

By (50), we get .

(iii) According to (ii), without loss of generality, we suppose that all are determined by (16) for all sufficiently large. Let be an index set with (16)}. Set

then and . If , then . Thus from (50), we obtain . Otherwise, , that is,

By using of the algorithm, we know that does not satisfy (16) for all sufficiently large. Therefore, we have

Dividing both sides by and then taking the limit as with , we get

that is,

It is clear from (50) that . Therefore the result holds.

#### 4. Local Superlinear Convergence

In this section, we prove the local superlinear convergence of the algorithm.

Lemma 12. *If (H1) holds and the sequence is generated by the algorithm, then there exist a constant and an index such that
**
whenever and . Furthermore, the relation
**
holds for all such that . *

*Proof. *By of the algorithm, there exists a constant such that (62) holds whenever and for all sufficiently large. It follows from Theorem 11 that the sequence converges to the unique solution of problem (12), and there exists a constant such that for all large enough. Then, from (52), we see that there exist constants and such that

whenever and for all sufficiently large. Due to (20), we obtain

Hence, we have
where the last inequality follows from (31). This implies that

Since holds, from (33), there exists such that

for all large enough. Then by (63), (65), and (66), we obtain
where . Let , then we have

whenever . Hence, when is sufficiently close to for all large enough, satisfies (62), which proves the conclusion.

Theorem 13. * Assume that (H1) holds, then the sequence generated by the algorithm converges superlinearly to the unique solution of (12). *

*Proof. * From (65), it is sufficient to verify that the sequence . Let and be determined by Lemma 12. By Lemmas 6 and 9, we have

Then there exists an index such that

whenever . This implies that

The above inequality implies that for any , there are at least many such that , that is, . Let , then, by Lemma 12, for any , there exist at least many such that and

Therefore, we have

Let be the index set for which (74) holds; then . When , is determined by . Then by (15) and (18), we get

Let . From (74) and (18), we obtain

that is,

Since ,

Hence,

Since implies that

Therefore, as . The proof is finished.

#### 5. Numerical Experiments

In this section, we carry out a number of numerical experiments based on Algorithm A. The results show that Algorithm A is effective. The numerical experiments are implemented on MATLAB 7.8.0.

The following parameter values were used:

When the condition holds, the algorithm stops.

In the tables of test results, denotes the number of iterations, the final residual of when the algorithm stops, and the optimal values of the primal and dual problems, and the duality gap of the primal-dual pair. For the first nine experiments, the elements of the vectors , , the matrix , and the initial points , , are random numbers from 0 to 10. is the identity matrix.

All of the following experiments are to solve the problem

*Example 14. *The coefficients were chosen as

and the initial point was . Let , . The results of the problem are listed in Table 1.

*Example 15. *The coefficients were chosen as

and the initial point was . Let , . The results of the problem are listed in Table 2.

*Example 16. *The coefficients were chosen as

and the initial point was . Let , . The results of the problem are listed in Table 3.

In Table 4, we give the final numerical results of the next six experiments, where the notations denote the numbers of rows and columns of the matrix , respectively, and denotes the index of the numerical experiment. In these experiments, we let .

The final numerical results of the last experiments are given in Table 5. All the matrices in these examples are sparse. The elements of the matrices , , , , , , are random numbers from 0 to 10; the remaining elements are zero. In and , we let . In , we set and .

From Table 5, we can see that Algorithm A is efficient. The algorithm can solve not only the case of a dense coefficient matrix but also the case of a sparse coefficient matrix. From Table 5, we also know that the algorithm is efficient whether or not the cone is partitioned.

#### Acknowledgments

This work was supported in part by the NNSF of China (no. 11061011), the Guangxi Fund for Distinguished Young Scholars (2012GXSFFA060003), and the Innovative Project of Guangxi Graduate Education (2011105950701M26).