Abstract and Applied Analysis

Volume 2013, Article ID 830698, 16 pages

http://dx.doi.org/10.1155/2013/830698

## A Smoothing Method with Appropriate Parameter Control Based on Fischer-Burmeister Function for Second-Order Cone Complementarity Problems

^{1}Department of Management System Science, Yokohama National University, 79-4 Tokiwadai, Hodogaya-ku, Yokohama 240-8501, Japan

^{2}Department of Mathematical Information Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan

^{3}Graduate School of Information Sciences, Tohoku University, 6-3-09 Aramaki-aza Aoba, Aoba-ku, Sendai 980-8579, Japan

Received 21 January 2013; Accepted 24 March 2013

Academic Editor: Jein-Shan Chen

Copyright © 2013 Yasushi Narushima et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We deal with complementarity problems over second-order cones. The second-order cone complementarity problem is an important class of problems that includes many optimization problems arising in the real world. The complementarity problem can be reformulated as a nonsmooth system of equations. Based on the smoothed Fischer-Burmeister function, we construct a smoothing Newton method for solving this nonsmooth system. The proposed method controls a smoothing parameter appropriately. We show the global and quadratic convergence of the method. Finally, some numerical results are given.

#### 1. Introduction

In this paper, we consider the second-order cone complementarity problem (SOCCP) of the following form: where denotes the Euclidean inner product, , , is a continuously differentiable function, and denotes the Cartesian product of several second-order cones (SOCs), that is, with , , and
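Since the displayed formulas are not reproduced above, the following minimal sketch assumes the standard definition of a second-order cone, namely that a vector belongs to the SOC exactly when the Euclidean norm of its tail does not exceed its first component; the helper names (`in_soc`, `in_product_cone`) are ours, not the paper's.

```python
import numpy as np

def in_soc(x, tol=0.0):
    """Check membership of x in the second-order cone
    K^n = {(x1, x2) in R x R^{n-1} : ||x2|| <= x1}."""
    x = np.asarray(x, dtype=float)
    if x.size == 1:          # n = 1: K^1 is the set of nonnegative reals
        return x[0] >= -tol
    return np.linalg.norm(x[1:]) <= x[0] + tol

def in_product_cone(x, dims, tol=0.0):
    """The cone K of the SOCCP is a Cartesian product of SOCs;
    dims gives the sizes n_1, ..., n_m of the factor cones."""
    x = np.asarray(x, dtype=float)
    offset = 0
    for n in dims:
        if not in_soc(x[offset:offset + n], tol):
            return False
        offset += n
    return True

print(in_soc([2.0, 1.0, 1.0]))   # ||(1,1)|| = sqrt(2) <= 2 -> True
print(in_soc([1.0, 1.0, 1.0]))   # sqrt(2) > 1 -> False
```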

The SOCCP is a wide class of complementarity problems. For example, it includes the mixed complementarity problem (MCP) and the nonlinear complementarity problem (NCP) [1] as subclasses, since each one-dimensional SOC reduces to the set of nonnegative reals. Moreover, the second-order cone programming (SOCP) problem can be reformulated as an SOCCP by using the Karush-Kuhn-Tucker (KKT) conditions. Beyond these, some practical problems in game theory [2, 3] and architecture [4] can be reformulated as SOCCPs.

Much theoretical and algorithmic research has been conducted on solving the SOCCP. Fukushima et al. [5] showed that the *natural residual* function, also called the min function, and the *Fischer-Burmeister* function for the NCP can be extended to the SOCCP by using the Jordan algebra. They further constructed smoothing functions for those SOC complementarity () functions and analyzed the properties of their Jacobian matrices. Hayashi et al. [6] proposed a smoothing method based on the natural residual and showed its global and quadratic convergence. On the other hand, Chen et al. [7] proposed another smoothing method with the natural residual in which, in contrast to [6], the smoothing parameter is treated as a variable. Moreover, they showed the global and quadratic convergence of their method. Similarly to Chen et al., Narushima et al. [8] proposed a smoothing method treating the smoothing parameter as a variable. They used the Fischer-Burmeister function instead of the natural residual function and also established the global and quadratic convergence of their method.

In the present paper, we propose a smoothing method with the Fischer-Burmeister function for solving SOCCP (1). The main differences from the existing methods are twofold. (i) We do not assume any special structure of the function in SOCCP (1). In [6, 7, 9, 10], the authors focused on the following type of SOCCP: which is a special case of SOCCP (1) with for some continuously differentiable function . In [11–13], the authors studied the following type of SOCCP: which is obtained by letting where and are continuously differentiable functions. However, we assume neither (4) nor (6). Therefore, our method is applicable to a wider class of SOCCPs. (ii) In contrast to [8], we do not incorporate the smoothing parameter into the decision variables. Instead, we control the smoothing parameter appropriately at each iteration.

This paper is organized as follows. In Section 2, we give some preliminaries, which will be used in the subsequent analysis. In Section 3, we review the SOC C-function. In particular, we recall the property of the (smoothed) Fischer-Burmeister function. In Section 4, we propose an algorithm for solving the SOCCP and discuss its global and local convergence properties. In Section 5, we report some preliminary numerical results.

Throughout the paper, we use the following notation. Let and be the sets of nonnegative and positive reals, respectively. For a symmetric matrix , we write (resp., ) if is positive semidefinite (resp., positive definite). For any , , we write (resp., ) if (resp., ), and we denote by the Euclidean inner product, that is, . We use the symbol to denote the usual -norm of a vector or the corresponding induced matrix norm. We often write (possibly vacuous), instead of . In addition, we often regard as . We sometimes divide a vector according to the Cartesian structure of , that is, with . For any Fréchet-differentiable function , we denote its transposed Jacobian matrix at by . For a given set , , bd , and conv denote the interior, the boundary, and the convex hull of in , respectively.

#### 2. Some Preliminaries

In this section, we recall some background materials and preliminary results used in the subsequent sections.

First, we review the Jordan algebra associated with SOCs. For any and , the Jordan product associated with is defined as When , that is, the second components and are vacuous, we interpret that the second component in (7) is also vacuous. We will write to mean and write to mean the usual componentwise addition of vectors and . For the Jordan product, the identity element is defined by . It is easily seen that for any . For any , we define as Note that and that for , , , and . Although the Jordan product is not associative, associativity holds under the inner product in the sense that In addition, it follows readily from the definition of that (resp., ) for any , (resp., ).
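Since the displayed formula (7) is not reproduced above, the following sketch assumes the standard Jordan product associated with the SOC, namely that the product of x and y has first component equal to their inner product and tail equal to the first component of each vector times the tail of the other; the identity element is the first coordinate vector. Function names are ours.

```python
import numpy as np

def jordan_product(x, y):
    """Jordan product on R^n associated with the SOC:
    x o y = (<x, y>, x1*y2 + y1*x2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def identity_element(n):
    """e = (1, 0, ..., 0), the identity for the Jordan product."""
    e = np.zeros(n)
    e[0] = 1.0
    return e

x = np.array([2.0, 1.0, -1.0])
y = np.array([1.0, 0.5, 2.0])
z = np.array([0.3, -1.0, 0.7])
e = identity_element(3)
print(jordan_product(x, e))   # x o e = x
# the product is not associative, but associativity holds under the
# inner product: <x o y, z> = <y, x o z>
print(np.isclose(jordan_product(x, y) @ z, y @ jordan_product(x, z)))
```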

For each , we define the symmetric matrix by which can be viewed as a linear mapping having the following properties.

*Property 1. *The following statements hold: (a) and for any ; (b) , and ; (c) is invertible whenever with
where denotes the determinant of .
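Since the displayed formula (10) is not reproduced above, the sketch below assumes the standard arrow-shaped matrix from the SOC literature, whose first row and column carry the vector itself and whose remaining diagonal carries the first component; acting with it reproduces the Jordan product. The function name `arrow` is ours.

```python
import numpy as np

def arrow(x):
    """Arrow matrix L_x satisfying L_x y = x o y:
    L_x = [[x1, x2^T], [x2, x1*I]]."""
    x = np.asarray(x, float)
    n = x.size
    L = x[0] * np.eye(n)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

x = np.array([3.0, 1.0, 2.0])
y = np.array([0.5, -1.0, 4.0])
Lx = arrow(x)
xy = np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))  # x o y
print(np.allclose(Lx @ y, xy))   # True: L_x realizes the Jordan product
# here x lies in the interior of the SOC (3 > ||(1,2)||), so L_x is invertible
print(np.linalg.det(Lx))
```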

An important feature of the Jordan algebra is its spectral factorization. By the spectral factorization associated with SOC, any can be decomposed as where , and , are the spectral values and the associated spectral vectors of given by for , with being any vector in satisfying . If , the decomposition (12) is unique. We note again that when (viz., ), we have , . The spectral factorization associated with SOC leads to a number of interesting properties, some of which are as follows.
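As the displayed formulas (12)-(13) are not reproduced above, the sketch below follows the standard spectral factorization from the SOC literature: the two spectral values are the first component minus and plus the norm of the tail, and the spectral vectors are half of (1, ∓w) for a unit vector w along the tail (any unit vector when the tail vanishes). Function names are ours; the sketch assumes n ≥ 2.

```python
import numpy as np

def spectral(x):
    """Spectral factorization x = lam1*u1 + lam2*u2 associated with the SOC:
    lam_{1,2} = x1 -/+ ||x2||,  u_{1,2} = (1/2)(1, -/+ x2/||x2||)."""
    x = np.asarray(x, float)
    x2 = x[1:]
    nrm = np.linalg.norm(x2)
    # if x2 = 0, w may be any unit vector; pick the first coordinate axis
    w = x2 / nrm if nrm > 0 else np.eye(1, x2.size).ravel()
    lam = (x[0] - nrm, x[0] + nrm)
    u = (0.5 * np.concatenate(([1.0], -w)), 0.5 * np.concatenate(([1.0], w)))
    return lam, u

x = np.array([2.0, 3.0, 0.0])
(l1, l2), (u1, u2) = spectral(x)
print(l1, l2)                              # -1.0, 5.0
print(np.allclose(l1 * u1 + l2 * u2, x))   # the decomposition recovers x
# x lies in the SOC iff both spectral values are nonnegative:
print(l1 >= 0 and l2 >= 0)                 # False here, since ||x2|| > x1
```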

*Property 2. *For any , let , and , be the spectral values and the associated spectral vectors at . Then the following statements hold.(a), , .(b).(c) Let . Then . Moreover, , .

In what follows, we recall some definitions for functions and matrices. Semismoothness is a generalization of smoothness; it was originally introduced by Mifflin [14] for functionals and extended to vector-valued functions by Qi and Sun [15]. For vector-valued functions associated with SOC, see also the work of Chen et al. [16]. Now we give the definition of the Clarke subdifferential [17].

*Definition 1. *Let be a locally Lipschitzian function. The Clarke subdifferential of at is defined by
where is the set of points at which is differentiable.

Note that if is continuously differentiable at , then . We next give the definitions of the semismoothness and the strong semismoothness.

*Definition 2. *A directionally differentiable and locally Lipschitzian function is said to be semismooth at if
for any sufficiently small and , where
is the directional derivative of at along the direction . In particular, if can be replaced by , then the function is said to be strongly semismooth.

It is known that if is (strongly) semismooth, then holds (see [18], e.g.).

The definitions below for a function can be found in [10, 13, 19].

*Definition 3 (see [10, 13]). *A function with is said to have (a)the Cartesian -property, if for any , with , there exists an index such that and ; (b)the uniform Cartesian -property, if there exists a constant such that, for any , , there exists an index such that .

By the definitions, it is clear that the Cartesian -property implies the Cartesian -property. Definition 3 is associated with SOCCP (3), while the following definitions are associated with SOCCP (5).

*Definition 4 (see [19]). *Let and be functions such that , with and . Then, and are said to have (a)the joint uniform Jordan -property, if there exists a constant such that
(b)the joint Cartesian weak coerciveness, if there exists a vector such that

Next we recall the concept of linear growth of a function, which is weaker than global Lipschitz continuity.

*Definition 5 (see [19]). *A function is said to have linear growth, if there exists a constant such that for any .

The following definition for a matrix was originally given in [8]; it generalizes the mixed -property [1].

*Definition 6 (see [8]). *Let be a matrix partitioned as follows:
where , and . Then, is said to have (a) the Cartesian mixed -property, if the following statements hold: (1) has full column rank;(2) for any , with and such that , there exists an index such that and ; (b) the Cartesian mixed -property, if (1) of (a) and the following statement hold: for any , with and such that , there exists an index such that .

In the case (i.e., is vacuous) and , has the Cartesian mixed ()-property if and only if has the Cartesian ()-property (see [10, 13], e.g.). By the definitions, it is clear that the Cartesian mixed -property implies the Cartesian mixed -property. Moreover, when , the Cartesian mixed -property reduces to the mixed -property (see [1, page 1013]).

We now introduce the Cartesian mixed Jordan ()-property.

*Definition 7. *Let be a matrix partitioned as follows:
where , and . Then, is said to have (a) the Cartesian mixed Jordan -property, if the following statements hold: (1) has full column rank;(2) for any , with and such that , there exists an index such that and ; (b) the Cartesian mixed Jordan -property, if (1) of (a) and the following statement hold: for any , with and such that , there exists an index such that .

Note that the relation (resp., ) can be rewritten as (resp., ). By the definitions, it is clear that the Cartesian mixed Jordan -property implies the Cartesian mixed Jordan -property, and that the Cartesian mixed Jordan ()-property implies the Cartesian mixed ()-property. Similar to the Cartesian mixed -property, in the case , the Cartesian mixed Jordan -property also reduces to the mixed -property.

#### 3. SOC C-Function and Its Smoothing Function

In this section, we introduce the SOC C-function and its smoothing function. In Section 3.1, we give the concept of the SOC C-function to transform the SOCCP into a system of equations. We focus on the Fischer-Burmeister function as an SOC C-function and review some properties of the smoothed Fischer-Burmeister function in Section 3.2.

##### 3.1. SOC C-Function

First, we recall the concept of the SOC C-function.

*Definition 8. *A function is said to be an SOC complementarity function, if the following holds:

Let be defined as
where and are divided as and with , , , and are SOC C-functions. Then it follows from (22) that
Accordingly, SOCCP (1) is reformulated as a system of equations , where is defined by
Moreover, we also give a merit function defined by
Note that for any , and that if and only if is a solution of SOCCP (1).

There are many kinds of SOC C-functions. The natural residual function and the Fischer-Burmeister function are respectively defined by
where denotes the projection of onto the SOC . Fukushima et al. [5] showed that (22) holds for the functions and . Chen et al. [7] and Hayashi et al. [6] proposed methods for solving the SOCCP based on the natural residual function (27), whereas Narushima et al. [8] proposed a method for solving the SOCCP based on the Fischer-Burmeister function (28).
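Since the displayed formulas (27)-(28) are not reproduced above, the sketch below follows the standard definitions: the natural residual subtracts from x its projection of x − y onto the SOC, and the Fischer-Burmeister function is x + y minus the Jordan-algebraic square root of the sum of the Jordan squares of x and y. Both projection and square root are computed through the spectral factorization. All function names are ours; the sketch assumes n ≥ 2.

```python
import numpy as np

def spectral(z):
    """Spectral values/vectors of z associated with the SOC."""
    z = np.asarray(z, float)
    nrm = np.linalg.norm(z[1:])
    w = z[1:] / nrm if nrm > 0 else np.eye(1, z.size - 1).ravel()
    lam = (z[0] - nrm, z[0] + nrm)
    u = (0.5 * np.concatenate(([1.0], -w)), 0.5 * np.concatenate(([1.0], w)))
    return lam, u

def soc_proj(z):
    """Projection onto the SOC: replace spectral values by max(lam, 0)."""
    (l1, l2), (u1, u2) = spectral(z)
    return max(l1, 0.0) * u1 + max(l2, 0.0) * u2

def soc_sqrt(z):
    """Jordan-algebraic square root of z (requires both lam_i >= 0)."""
    (l1, l2), (u1, u2) = spectral(z)
    return np.sqrt(l1) * u1 + np.sqrt(l2) * u2

def jordan_sq(x):
    """x o x = (||x||^2, 2*x1*x2)."""
    x = np.asarray(x, float)
    return np.concatenate(([x @ x], 2.0 * x[0] * x[1:]))

def phi_nr(x, y):
    """Natural residual: x - P_K(x - y)."""
    return x - soc_proj(x - y)

def phi_fb(x, y):
    """Fischer-Burmeister: x + y - (x o x + y o y)^{1/2}."""
    return x + y - soc_sqrt(jordan_sq(x) + jordan_sq(y))

# at a complementary pair (x in K, y in K, x o y = 0), both functions vanish
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
print(np.allclose(phi_nr(x, y), 0), np.allclose(phi_fb(x, y), 0))
```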

In what follows, functions , , and denote , , and with , respectively. Also, functions , , and denote , , and with , respectively.

Recently, Bi et al. [20] showed the following inequality:
We see from (29) that the level-boundedness of is equivalent to that of .

##### 3.2. Smoothed FB Function and Its Properties

In this section, we consider the smoothing function associated with the Fischer-Burmeister function and give its properties and Jacobian matrix.

Since is not differentiable in general, we cannot directly apply conventional methods such as Newton's method or Newton-based methods. We therefore consider the smoothed Fischer-Burmeister function , which was originally proposed by Kanzow [21] for solving the NCP and generalized to the SOCCP by Fukushima et al. [5]. Let be defined by for each , where is a smoothing parameter and . Then, the smoothed Fischer-Burmeister function is defined as Also, the smoothing function and the merit function are defined as respectively. Clearly, , and so and . We note that holds for any and (see [5] or [22]). From definition (32) of and (34), it follows that for any and .
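As the displayed definitions (30)-(32) are not reproduced above, the sketch below uses one common form of the smoothing from the literature: the argument of the square root in the Fischer-Burmeister function is perturbed by a positive multiple of the identity element, here 2μ²e, which pushes it into the interior of the SOC and makes the square root smooth. The exact scaling of the perturbation in the paper may differ; all function names are ours.

```python
import numpy as np

def spectral(z):
    z = np.asarray(z, float)
    nrm = np.linalg.norm(z[1:])
    w = z[1:] / nrm if nrm > 0 else np.eye(1, z.size - 1).ravel()
    lam = (z[0] - nrm, z[0] + nrm)
    u = (0.5 * np.concatenate(([1.0], -w)), 0.5 * np.concatenate(([1.0], w)))
    return lam, u

def soc_sqrt(z):
    (l1, l2), (u1, u2) = spectral(z)
    return np.sqrt(l1) * u1 + np.sqrt(l2) * u2

def jordan_sq(x):
    x = np.asarray(x, float)
    return np.concatenate(([x @ x], 2.0 * x[0] * x[1:]))

def phi_fb_mu(x, y, mu):
    """Smoothed FB function (sketch): x + y - (x o x + y o y + 2*mu^2*e)^{1/2}.
    The 2*mu^2*e perturbation is an assumed common choice, not necessarily
    the paper's exact scaling."""
    e = np.zeros(x.size)
    e[0] = 1.0
    return x + y - soc_sqrt(jordan_sq(x) + jordan_sq(y) + 2.0 * mu**2 * e)

x = np.array([1.0, 0.3, -0.2])
y = np.array([0.5, 0.1, 0.4])
exact = x + y - soc_sqrt(jordan_sq(x) + jordan_sq(y))   # phi_FB(x, y)
print(np.linalg.norm(phi_fb_mu(x, y, 1e-6) - exact))    # ~0 as mu -> 0
# unlike phi_FB, the smoothed function is well defined and smooth at the
# origin, where phi_FB itself is nondifferentiable:
z0 = np.zeros(3)
print(phi_fb_mu(z0, z0, 0.5))   # = -sqrt(2)*0.5*e
```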

In what follows, we write for any vector . Moreover, for convenience, we use the following notation. For any , and any , we write and define the functions , by Furthermore, we drop the subscript for for simplicity, and thus, Direct calculation yields Note that is actually independent of , so that hereafter we will write . We also easily get, for , where , and , are the spectral values and the associated spectral vectors of , respectively, with if , and otherwise, being any vector in satisfying .

Now we review some propositions needed to establish convergence properties of the smoothing Newton method. The following proposition gives an explicit expression of the transposed Jacobian matrix with .

Proposition 9 (see [5]). *For any , is continuously differentiable on , and its transposed Jacobian matrix is given by
**
where denotes the block-diagonal matrix with block elements , and
**
with
*

In order to obtain the Newton step, the nonsingularity of is important. The next proposition establishes the nonsingularity of .

Proposition 10 (see [8]). *Let be an arbitrary nonzero number and let be an arbitrary triple such that the Jacobian matrix has the Cartesian mixed -property at , that is, satisfies
**
and
**
Then, the matrix given by (40) is nonsingular. *

The local Lipschitz continuity and the (strong) semismoothness of play a significant role in establishing locally rapid convergence.

Proposition 11. *The function is locally Lipschitzian on and, moreover, is semismooth on . In addition, if is locally Lipschitzian, then is strongly semismooth on . *

*Proof. * It follows from [23, Corollary 3.3] that is globally Lipschitzian and strongly semismooth. Since is a continuously differentiable function, is locally Lipschitzian on . Also, the (strong) semismoothness of can be easily shown from the strong semismoothness of and the (local Lipschitz) continuity of .

We define the function by . It is easily seen that if and only if . Now we partition as , where In order to achieve locally rapid convergence of the method, we need to control the parameter so that the distance between and is sufficiently small. The following proposition is helpful for controlling the parameter appropriately.

Proposition 12 (see [22]). *Let be any point in . Let be any function such that . Let be given. Let be defined by
**
where
**
Then, for any such that ,
**
where denotes . *

#### 4. Smoothing Newton Method and Its Convergence Properties

In this section, we first propose an algorithm of the smoothing Newton method based on the Fischer-Burmeister function and its smoothing function. We then prove its global and -superlinear (-quadratic) convergence.

##### 4.1. Algorithm

We provide the smoothing Newton algorithm based on the Fischer-Burmeister function. In what follows, we write and for simplicity.

*Algorithm 13. *

*Step 0.* Choose , , , , , , and .

Choose and . Let . Set .

*Step 1.* If a stopping criterion, such as , is satisfied, then stop.

*Step 2.*

*Step 2.0.*Set and .

*Step 2.1.*Compute by solving

*Step 2.2.*If , then let and go to Step 3. Otherwise, go to Step 2.3.

*Step 2.3.*Let be the smallest nonnegative integer satisfying

Let and .

*Step 2.4.*If then let and go to Step 3. Otherwise, set and go back to Step 2.1.

*Step 3.*Update the parameters as follows:

*Step 4.*Set . Go back to Step 1.

Note that the proposed algorithm consists of outer iteration steps and inner iteration steps. Step 2 constitutes the inner iteration, with variable and counter , while the outer iteration has variable and counter .
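Because the displayed update rules (48)-(52) of Algorithm 13 are not reproduced above, the skeleton below only illustrates the outer/inner structure the text describes: an inner Newton iteration on the smoothed system with an Armijo-type line search, and an outer loop that shrinks the smoothing parameter once the inner residual target is met. Every parameter name and update rule here is our assumption, not the paper's exact scheme; the usage example solves a scalar NCP (find x ≥ 0 with f(x) = x − 2 ≥ 0 and x·f(x) = 0, solution x = 2) via a smoothed scalar Fischer-Burmeister equation.

```python
import numpy as np

def smoothing_newton(H, jac_H, z0, mu0=1.0, beta=0.5, sigma=1e-4,
                     kappa=0.2, tol=1e-8, max_outer=100, max_inner=50):
    """Schematic smoothing Newton loop. H(z, mu) is the smoothed
    equation, jac_H its Jacobian; mu is driven to 0 as the residual
    decreases. All parameter updates here are illustrative only."""
    z, mu = np.asarray(z0, float).copy(), mu0
    for _ in range(max_outer):
        if np.linalg.norm(H(z, 0.0)) <= tol:        # stopping criterion
            break
        target = kappa * np.linalg.norm(H(z, mu))   # inner-loop target
        for _ in range(max_inner):                  # inner Newton iteration
            r = H(z, mu)
            d = np.linalg.solve(jac_H(z, mu), -r)   # Newton step
            t = 1.0                                 # Armijo-type line search
            while np.linalg.norm(H(z + t * d, mu)) > (1 - sigma * t) * np.linalg.norm(r):
                t *= beta
                if t < 1e-12:
                    break
            z = z + t * d
            if np.linalg.norm(H(z, mu)) <= target:
                break
        mu *= kappa                                 # shrink smoothing parameter
    return z

# usage: scalar NCP with f(x) = x - 2, smoothed FB equation
f = lambda x: x - 2.0
def H(z, mu):
    x = z[0]
    return np.array([x + f(x) - np.sqrt(x * x + f(x)**2 + 2.0 * mu * mu)])
def jac_H(z, mu):
    x = z[0]
    s = np.sqrt(x * x + f(x)**2 + 2.0 * mu * mu)
    return np.array([[2.0 - (x + f(x)) / s]])   # f'(x) = 1 here

z = smoothing_newton(H, jac_H, np.array([5.0]))
print(z)   # close to [2.0]
```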

From Step 3 of Algorithm 13 and (48), the following inequality holds: Letting be a solution of SOCCP (1), we have . Therefore, from (53) and the local Lipschitz continuity of , the following holds:

In the rest of this section, we consider convergence properties of Algorithm 13. In Section 4.2, we prove the global convergence of the algorithm, and in Section 4.3, we investigate local behavior of the algorithm. For this purpose, we make the following assumptions.

*Assumption 1. *(A1) The solution set of SOCCP (1) is nonempty and bounded. (A2) The function is level-bounded, that is, for any , the level set is bounded. (A3) For any and , is nonsingular.

From Proposition 10, Assumption (A3) holds if has the Cartesian mixed -property for any . The following remarks correspond to SOCCPs (3) and (5).

*Remark 14. *The case of SOCCP (3). If has the Cartesian -property, then with in (4) has the Cartesian mixed -property and vice versa. Note that has the Cartesian -property at any if has the Cartesian -property (see [10, 13] for the definition of the Cartesian -property for a matrix).

*Remark 15. *The case of SOCCP (5). If is nonsingular and has the Cartesian -property, then with in (6) has the Cartesian mixed -property.

Note that Assumption (A2) is equivalent to the coerciveness in the sense that We now give some sufficient conditions for Assumption (A2) in the case of SOCCP (3) or (5).

Lemma 16. *Consider SOCCP (5). Let be a function such that for any . Then is level-bounded if and only if is level-bounded. *

*Proof. *Below, we use condition (55), which is equivalent to the level-boundedness. We first assume that is level-bounded and claim that is level-bounded. Suppose the contrary. Then, we can find a sequence such that the sequence is bounded and . If is bounded, then we must have . Thus, from the inequality
and from the continuity of and , we have . Since this is not possible, is unbounded. By taking a subsequence if necessary, we may assume that as . Since is globally Lipschitzian by [23, Corollary 3.3], we have from (33) that, for any ,
where is a Lipschitz constant. Then, it follows from (57) and the level-boundedness of that , contradicting the boundedness of . This proves the level-boundedness of .

We next assume that is level-bounded. Let be an arbitrary sequence such that and let and . Then we have
Thus, from and the level-boundedness of , we have . Therefore, is level-bounded.

*Remark 17. *Consider SOCCP (3). Let be a function such that for any . Then, in the same way as in Lemma 16, we can show that is level-bounded if and only if is level-bounded (we have only to consider ).

We now provide some sufficient conditions under which Assumption (A2) holds.

Proposition 18. *Consider SOCCP (5). Assume that and have linear growth. Assume further that and satisfy one of the following statements: *(a) and have the joint uniform Jordan -property;(b) and have the joint Cartesian weak coerciveness. * Then is level-bounded. *

*Proof. * It follows from [19] that is level-bounded in each case. Thus, from Lemma 16 and (29), we have the desired results.

The following condition was given by Pan and Chen [13] to establish the level-boundedness property of the merit function defined in Remark 17.

*Condition A.* Consider SOCCP (3). For any sequence satisfying with , if there exists an index such that and are bounded below, and , then
Under Condition A, we have the following proposition, which corresponds to Proposition 5.2 of [13].

Proposition 19. *Consider SOCCP (3). Assume that has the uniform Cartesian -property and satisfies Condition A. Then is level-bounded.*

##### 4.2. Global Convergence

In this section, we show the global convergence of Algorithm 13. We first give the well-definedness of the algorithm.

Lemma 20. *Suppose that Assumption (A3) holds. Let be any fixed positive number. Every stationary point of satisfies . *

*Proof. * For each stationary point of , holds. Since, from and Assumption (A3), is nonsingular, we have , and thus, .

It follows from (35) that for any , and hence we have, from Assumption (A2) and (55), that is level-bounded for any fixed . Therefore, there exists at least one stationary point of . Thus from Lemma 20, the system has at least one solution, and hence, there exists a point satisfying in Step 2 at each iteration.

We are now ready to show the well-definedness of the algorithm.

Proposition 21. *Suppose that Assumptions (A2) and (A3) hold. Then Algorithm 13 is well-defined.*

*Proof. *To establish the well-definedness of Algorithm 13, we only need to prove the well-definedness and the finite termination property of Step 2 at each iteration. Now we fix and . Since is nonsingular for any by and Assumption (A3), is uniquely determined for any . In addition, we have
If , then Step 2 terminates in Step 2.2. If , then integer satisfying (50) can be found at Step 2.3, because and . Thus, Step 2 is well-defined at each iteration.

Next we prove the finite termination property of Step 2. To derive a contradiction, we assume that Step 2 never stops; then
holds for all . We consider two cases. (i) The case where there exists a subsequence such that . From the boundedness of the level set of at and the line search rule (50), is bounded. In addition, from the continuous differentiability of and Assumption (A3), is also bounded. Thus, there exists a subsequence such that
Now holds for all sufficiently large , and hence, we have
Passing to the limit with on the above inequality and taking (62) into account, we have
On the other hand, it follows from (61) that , which contradicts (65). (ii) The case where there exists such that for all . It follows from (50) that
which implies holds for sufficiently large . This contradicts (62). Therefore, the proof is complete.

In order to show the global convergence of the proposed method, we recall the mountain pass theorem (see [24], e.g.), which is as follows.

Lemma 22. *Let be a continuously differentiable and level-bounded function. Let be a nonempty and compact set and let be the minimum value of on bd , that is,
**
Assume that there exist vectors and such that and . Then, there exists a vector such that and . *

By using the mountain pass theorem, we can show the following global convergence property.

Theorem 23. *Suppose that Assumptions (A1)–(A3) hold. Then, the sequence generated by Algorithm 13 is bounded, and hence, at least one accumulation point exists, and any such point is a solution of SOCCP (1). *

*Proof. * From the choices of and in Step 3 of Algorithm 13, and converge to zero. Since , we have . Thus, it follows from and (35) that . It implies from the continuity of that any accumulation point of the sequence is a solution of , and hence, it suffices to prove the boundedness of . To the contrary, we assume is unbounded. Then there exist an index set and a subsequence such that . Since, by Assumption (A1), the solution set is bounded, there exists a compact neighborhood of such that . From the boundedness of , for all sufficiently large . In addition, from , we have
for otherwise, there would exist with , that is, , which contradicts . Since is small enough for all sufficiently large , it follows from (35) that
holds for any . Now we take . Then (69) yields
Letting
we have from (68) and that . Therefore, it follows from (69) and (70) that, for all sufficiently large ,
On the other hand, since and , we get
for all sufficiently large.

Now we choose sufficiently large satisfying all the above arguments with and apply Lemma 22 with
Then there exists satisfying
which contradicts Lemma 20, and therefore the proof is complete.

##### 4.3. Local -Superlinear and -Quadratic Convergence

In Section 4.2, we have shown that the sequence is bounded, and any accumulation point of is a solution of SOCCP (1). In this section, we prove that is superlinearly convergent, or more strongly, quadratically convergent if is locally Lipschitzian. In order to establish the superlinear (quadratic) convergence of the algorithm, we need an assumption that every accumulation point of is nonsingular. We first consider a sufficient condition for this assumption to hold.

Let and be the sequences generated by Algorithm 13, and let be any accumulation point of . Then, by Theorem 23, is a solution of SOCCP (1). We call the following condition nondegeneracy of a solution of the SOCCP (see also [13, 25]).

*Definition 24. *Let be a solution of SOCCP (1) with , . Then we say that is nondegenerate if , or equivalently, for all .

For a nondegenerate solution, we have the next lemma.

Lemma 25. *Let be a nondegenerate solution of SOCCP (1), and put , for . Let be a nonzero number. Then, for each , the following holds:
*