Abstract

We deal with complementarity problems over second-order cones. Complementarity problems form an important class of real-world problems and include many optimization problems as special cases. A complementarity problem can be reformulated as a nonsmooth system of equations. Based on the smoothed Fischer-Burmeister function, we construct a smoothing Newton method for solving such a nonsmooth system. The proposed method controls the smoothing parameter appropriately at each iteration. We show the global and quadratic convergence of the method. Finally, some numerical results are given.

1. Introduction

In this paper, we consider the second-order cone complementarity problem (SOCCP) of the following form: where denotes the Euclidean inner product, , , is a continuously differentiable function, and denotes the Cartesian product of several second-order cones (SOCs), that is, with , , and

The SOCCP is a wide class of complementarity problems. For example, it includes the mixed complementarity problem (MCP) and the nonlinear complementarity problem (NCP) [1] as subclasses, since the Cartesian product of SOCs reduces to the nonnegative orthant when every cone is one-dimensional. Moreover, the second-order cone programming (SOCP) problem can be reformulated as an SOCCP by using the Karush-Kuhn-Tucker (KKT) conditions. Apart from these, some practical problems in game theory [2, 3] and architecture [4] can be reformulated as SOCCPs.

Much theoretical and algorithmic research has been done so far on solving the SOCCP. Fukushima et al. [5] showed that the natural residual function, also called the min function, and the Fischer-Burmeister function for the NCP can be extended to the SOCCP by using the Jordan algebra. They further constructed smoothing functions for those SOC complementarity (C-) functions and analyzed the properties of their Jacobian matrices. Hayashi et al. [6] proposed a smoothing method based on the natural residual and showed its global and quadratic convergence. On the other hand, Chen et al. [7] proposed another smoothing method with the natural residual in which the smoothing parameter is treated as a variable, in contrast to [6]. Moreover, they showed the global and quadratic convergence of their method. Similarly to Chen et al., Narushima et al. [8] proposed a smoothing method treating the smoothing parameter as a variable. They used the Fischer-Burmeister function instead of the natural residual function and also proved the global and quadratic convergence of their method.

In the present paper, we propose a smoothing method with the Fischer-Burmeister function for solving SOCCP (1). The main difference from the existing methods is twofold. (i) We do not assume any special structure on the function in SOCCP (1). In [6, 7, 9, 10], the authors focused on the following type of SOCCP: which is a special case of SOCCP (1) with for some continuously differentiable function . In [11–13], the authors studied the following type of SOCCP: which is obtained by letting where and are continuously differentiable functions. However, we assume neither (4) nor (6). Therefore, our method is applicable to a wider class of SOCCPs. (ii) In contrast to [8], we do not incorporate the smoothing parameter into the decision variable. Instead, we control the smoothing parameter appropriately at each iteration.

This paper is organized as follows. In Section 2, we give some preliminaries, which will be used in the subsequent analysis. In Section 3, we review the SOC C-function. In particular, we recall the property of the (smoothed) Fischer-Burmeister function. In Section 4, we propose an algorithm for solving the SOCCP and discuss its global and local convergence properties. In Section 5, we report some preliminary numerical results.

Throughout the paper, we use the following notation. Let and be the sets of nonnegative and positive reals. For a symmetric matrix , we write (resp., ) if is positive semidefinite (resp., positive definite). For any , , we write (resp., ) if (resp., ), and we denote by the Euclidean inner product, that is, . We use the symbol to denote the usual -norm of a vector or the corresponding induced matrix norm. We often write (possibly vacuous), instead of . In addition, we often regard as . We sometimes divide a vector according to the Cartesian structure of , that is, with . For any Fréchet-differentiable function , we denote its transposed Jacobian matrix at by . For a given set , , bd , and conv mean the interior, the boundary, and the convex hull of in , respectively.

2. Some Preliminaries

In this section, we recall some background materials and preliminary results used in the subsequent sections.

First, we review the Jordan algebra associated with SOCs. For any x = (x1, x2) and y = (y1, y2) in R × R^{n-1}, the Jordan product associated with the SOC is defined as x ∘ y := (x^T y, x1 y2 + y1 x2). When n = 1, that is, the second components x2 and y2 are vacuous, we interpret that the second component in (7) is also vacuous. We will write x^2 to mean x ∘ x and write x + y to mean the usual componentwise addition of the vectors x and y. For the Jordan product, the identity element is defined by e := (1, 0, …, 0)^T. It is easily seen that x ∘ e = x for any x. For any , we define as Note that and that for , , , and . Although the Jordan product is not associative, associativity holds under the inner product in the sense that ⟨x, y ∘ z⟩ = ⟨x ∘ y, z⟩. In addition, it follows readily from the definition of that (resp., ) for any , (resp., ).
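As a small illustration, the Jordan product can be coded in a few lines. The sketch below (Python with NumPy, not the paper's MATLAB implementation; the helper name `jordan_product` is ours) assumes the standard coordinates, with the first component a scalar and the remaining components a vector:

```python
import numpy as np

def jordan_product(x, y):
    """Jordan product x ∘ y = (<x, y>, x1*y2 + y1*x2) associated with the SOC."""
    x1, x2 = x[0], x[1:]
    y1, y2 = y[0], y[1:]
    return np.concatenate(([np.dot(x, y)], x1 * y2 + y1 * x2))

# The identity element is e = (1, 0, ..., 0): x ∘ e = x for any x.
x = np.array([3.0, 1.0, -2.0])
e = np.array([1.0, 0.0, 0.0])
assert np.allclose(jordan_product(x, e), x)

# The product is commutative (but not associative in general).
y = np.array([0.5, 2.0, 1.0])
assert np.allclose(jordan_product(x, y), jordan_product(y, x))
```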

For each , we define the symmetric matrix by which can be viewed as a linear mapping having the following properties.

Property 1. There holds that (a) and for any ; (b), and ; (c) is invertible whenever with where denotes the determinant of .

An important feature of the Jordan algebra is its spectral factorization. By the spectral factorization associated with the SOC, any x = (x1, x2) ∈ R × R^{n-1} can be decomposed as x = λ1 u1 + λ2 u2, where λ1, λ2 and u1, u2 are the spectral values and the associated spectral vectors of x, given by λi = x1 + (-1)^i ‖x2‖ and ui = (1/2)(1, (-1)^i w) for i = 1, 2, with w = x2/‖x2‖ if x2 ≠ 0, and w being any vector in R^{n-1} satisfying ‖w‖ = 1 otherwise. If x2 ≠ 0, the decomposition (12) is unique. We note again that when n = 1 (viz., x = x1), we have λ1 = λ2 = x1 and u1 = u2 = 1/2. The spectral factorization associated with the SOC leads to a number of interesting properties, some of which are as follows.
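The spectral factorization is straightforward to compute numerically. A minimal sketch (Python/NumPy; the helper name `spectral_factorization` is ours), using the standard formulas λi = x1 + (-1)^i ‖x2‖ and ui = (1/2)(1, (-1)^i x2/‖x2‖):

```python
import numpy as np

def spectral_factorization(x):
    """Spectral values lam1 <= lam2 and spectral vectors u1, u2 of x w.r.t. the SOC."""
    x1, x2 = x[0], x[1:]
    norm2 = np.linalg.norm(x2)
    # If x2 = 0, any unit vector may be used for w.
    w = x2 / norm2 if norm2 > 0 else np.concatenate(([1.0], np.zeros(len(x2) - 1)))
    lam1, lam2 = x1 - norm2, x1 + norm2
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return lam1, lam2, u1, u2

x = np.array([2.0, 3.0, 4.0])
lam1, lam2, u1, u2 = spectral_factorization(x)
# The decomposition reconstructs x:  x = lam1*u1 + lam2*u2.
assert np.allclose(lam1 * u1 + lam2 * u2, x)
# x lies in the SOC iff both spectral values are nonnegative (here lam1 = -3 < 0).
assert lam1 == -3.0 and lam2 == 7.0
```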

Property 2. For any , let , and , be the spectral values and the associated spectral vectors at . Then the following statements hold.(a), , .(b).(c) Let . Then . Moreover, , .

In what follows, we recall some definitions for functions and matrices. Semismoothness is a generalization of smoothness; it was originally introduced by Mifflin [14] for functionals and extended to vector-valued functions by Qi and Sun [15]. For vector-valued functions associated with SOCs, see also the work of Chen et al. [16]. We first give the definition of the Clarke subdifferential [17].

Definition 1. Let F be a locally Lipschitzian function. The Clarke subdifferential of F at x is defined by ∂F(x) := conv{ V : V = lim_{k→∞} ∇F(x^k), x^k → x, x^k ∈ D_F }, where D_F is the set of points at which F is differentiable.
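A small numerical illustration of this definition: for the scalar function f(x) = |x|, the limiting gradients near 0 are -1 and +1, so the Clarke subdifferential at 0 is their convex hull, the interval [-1, 1]:

```python
import numpy as np

# f(x) = |x| is differentiable everywhere except 0, with gradient sign(x).
# The limiting gradients as x -> 0 are -1 and +1, so the Clarke
# subdifferential at 0 is conv{-1, +1} = [-1, 1].
grads = sorted({float(np.sign(t)) for t in (-1e-3, -1e-6, 1e-6, 1e-3)})
assert grads == [-1.0, 1.0]

# Every convex combination of the limiting gradients lies in [-1, 1].
for theta in (0.0, 0.3, 1.0):
    g = theta * grads[0] + (1.0 - theta) * grads[1]
    assert -1.0 <= g <= 1.0
```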

Note that if is continuously differentiable at , then . We next give the definitions of the semismoothness and the strong semismoothness.

Definition 2. A directionally differentiable and locally Lipschitzian function is said to be semismooth at if for any sufficiently small and , where is the directional derivative of at along the direction . In particular, if o(‖h‖) can be replaced by O(‖h‖^2), then the function is said to be strongly semismooth.
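For concreteness, the semismoothness condition of Definition 2 can be written, following the standard formulation of Qi and Sun [15], as

```latex
F(x + h) - F(x) - F'(x + h;\, h) = o(\|h\|) \quad (h \to 0),
```

and F is said to be strongly semismooth at x if the right-hand side can be strengthened to O(\|h\|^2).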

It is known that if is (strongly) semismooth, then holds (see [18], e.g.).

The definitions below for a function can be found in [10, 13, 19].

Definition 3 (see [10, 13]). A function with is said to have (a)the Cartesian -property, if for any , with , there exists an index such that and ; (b)the uniform Cartesian -property, if there exists a constant such that, for any , , there exists an index such that .
By the definitions, it is clear that the Cartesian -property implies the Cartesian -property. Definition 3 is associated with SOCCP (3), while the following definitions are associated with SOCCP (5).

Definition 4 (see [19]). Let and be functions such that , with and . Then, and are said to have (a)the joint uniform Jordan -property, if there exists a constant such that (b)the joint Cartesian weak coerciveness, if there exists a vector such that

Next we recall the concept of linear growth of a function, which is weaker than the global Lipschitz continuity.

Definition 5 (see [19]). A function is said to have linear growth, if there exists a constant such that for any .

The following definitions for a matrix were originally given in [8]; they generalize the mixed -property [1].

Definition 6 (see [8]). Let be a matrix partitioned as follows: where , and . Then, is said to have (a) the Cartesian mixed -property, if the following statements hold: (1) has full column rank;(2) for any , with and such that , there exists an index such that and ; (b) the Cartesian mixed -property, if (1) of (a) and the following statement hold: for any , with and such that , there exists an index such that .
In the case (i.e., is vacuous) and , has the Cartesian mixed ()-property if and only if has the Cartesian ()-property (see [10, 13], e.g.). By the definitions, it is clear that the Cartesian mixed -property implies the Cartesian mixed -property. Moreover, when , the Cartesian mixed -property reduces to the mixed -property (see [1, page 1013]).
We now introduce the Cartesian mixed Jordan ()-property.

Definition 7. Let be a matrix partitioned as follows: where , and . Then, is said to have (a) the Cartesian mixed Jordan -property, if the following statements hold: (1) has full column rank;(2) for any , with and such that , there exists an index such that and ; (b) the Cartesian mixed Jordan -property, if (1) of (a) and the following statement hold: for any , with and such that , there exists an index such that .
Note that the relation (resp., ) can be rewritten as (resp., ). By the definitions, it is clear that the Cartesian mixed Jordan -property implies the Cartesian mixed Jordan -property, and that the Cartesian mixed Jordan ()-property implies the Cartesian mixed ()-property. Similar to the Cartesian mixed -property, in the case , the Cartesian mixed Jordan -property also reduces to the mixed -property.

3. SOC C-Function and Its Smoothing Function

In this section, we introduce the SOC C-function and its smoothing function. In Section 3.1, we give the concept of the SOC C-function to transform the SOCCP into a system of equations. We focus on the Fischer-Burmeister function as an SOC C-function and review some properties of the smoothed Fischer-Burmeister function in Section 3.2.

3.1. SOC C-Function

First, we recall the concept of the SOC C-function.

Definition 8. A function is said to be an SOC complementarity function, if the following holds:
Let be defined as where and are divided as and with , , , and are SOC C-functions. Then it follows from (22) that Accordingly, SOCCP (1) is reformulated as a system of equations , where is defined by Moreover, we also give a merit function defined by Note that for any , and that if and only if is a solution of SOCCP (1).
There are many kinds of SOC C-functions. The natural residual function and the Fischer-Burmeister function are respectively defined by φ_NR(x, y) := x - P_K(x - y) and φ_FB(x, y) := x + y - (x^2 + y^2)^{1/2}, where P_K(z) denotes the projection of z onto the SOC K. Fukushima et al. [5] showed that (22) holds for both functions. Chen et al. [7] and Hayashi et al. [6] proposed methods for solving the SOCCP based on the natural residual function (27), whereas Narushima et al. [8] proposed a method for solving the SOCCP based on the Fischer-Burmeister function (28).
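Both C-functions can be evaluated through the spectral factorization: the projection onto the SOC clips the spectral values at zero, and the Jordan-algebraic square root takes the square root of the spectral values. The sketch below (Python/NumPy; all helper names are ours, not from the paper) checks that both functions vanish at a complementary pair on the boundary of the cone:

```python
import numpy as np

def spec(x):
    """Spectral values/vectors of x w.r.t. the SOC."""
    x1, x2 = x[0], x[1:]
    r = np.linalg.norm(x2)
    w = x2 / r if r > 0 else np.concatenate(([1.0], np.zeros(len(x2) - 1)))
    return x1 - r, x1 + r, 0.5 * np.concatenate(([1.0], -w)), 0.5 * np.concatenate(([1.0], w))

def jp(x, y):
    """Jordan product."""
    return np.concatenate(([np.dot(x, y)], x[0] * y[1:] + y[0] * x[1:]))

def soc_sqrt(z):
    """Square root w.r.t. the Jordan product, for z in the SOC."""
    l1, l2, u1, u2 = spec(z)
    return np.sqrt(l1) * u1 + np.sqrt(l2) * u2

def proj_soc(z):
    """Projection onto the SOC: clip spectral values at zero."""
    l1, l2, u1, u2 = spec(z)
    return max(l1, 0.0) * u1 + max(l2, 0.0) * u2

def phi_fb(x, y):
    """Fischer-Burmeister function: x + y - (x^2 + y^2)^(1/2)."""
    return x + y - soc_sqrt(jp(x, x) + jp(y, y))

def phi_nr(x, y):
    """Natural residual function: x - P_K(x - y)."""
    return x - proj_soc(x - y)

# x, y on the boundary of the SOC with <x, y> = 0: both C-functions vanish.
x, y = np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])
assert np.allclose(phi_fb(x, y), 0.0) and np.allclose(phi_nr(x, y), 0.0)
```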
In what follows, functions , , and denote , , and with , respectively. Also, functions , , and denote , , and with , respectively.
Recently, Bi et al. [20] showed the following inequality: We see from (29) that the level-boundedness of is equivalent to that of .

3.2. Smoothed FB Function and Its Properties

In this section, we consider the smoothing function associated with the Fischer-Burmeister function and give its properties and Jacobian matrix.

Since φ_FB is not differentiable in general, we cannot directly apply conventional methods such as Newton’s method or Newton-based methods. We therefore consider the smoothed Fischer-Burmeister function, which was originally proposed by Kanzow [21] for solving the NCP and generalized by Fukushima et al. [5] to the SOCCP. Let the function be defined by φ_μ(x, y) := x + y - (x^2 + y^2 + 2μ^2 e)^{1/2} for each (x, y), where μ is a smoothing parameter and e is the identity element. Then, the smoothed Fischer-Burmeister function is defined as Also, the smoothing function and the merit function are defined as respectively. Clearly, , and so and . We note that holds for any and (see [5] or [22]). From definition (32) of and (34), it follows that for any and .
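A sketch of the smoothed function, assuming the standard form φ_μ(x, y) = x + y - (x^2 + y^2 + 2μ^2 e)^{1/2} from [5] (Python/NumPy; helper names ours). For μ > 0 the argument of the square root lies in the interior of the SOC, which is what makes φ_μ continuously differentiable, and the smoothing error vanishes as μ → 0:

```python
import numpy as np

def jp(x, y):
    """Jordan product."""
    return np.concatenate(([np.dot(x, y)], x[0] * y[1:] + y[0] * x[1:]))

def soc_sqrt(z):
    """Jordan-algebraic square root of z in the SOC, via spectral factorization."""
    z1, z2 = z[0], z[1:]
    r = np.linalg.norm(z2)
    w = z2 / r if r > 0 else np.concatenate(([1.0], np.zeros(len(z2) - 1)))
    l1, l2 = z1 - r, z1 + r
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(l1) * u1 + np.sqrt(l2) * u2

def phi_mu(x, y, mu):
    """Smoothed FB function: x + y - (x∘x + y∘y + 2*mu^2*e)^(1/2)."""
    e = np.zeros(len(x)); e[0] = 1.0
    return x + y - soc_sqrt(jp(x, x) + jp(y, y) + 2.0 * mu ** 2 * e)

# At a complementary pair the unsmoothed FB function vanishes, and the
# smoothed value shrinks to zero as mu -> 0.
x, y = np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])
errs = [np.linalg.norm(phi_mu(x, y, mu)) for mu in (1.0, 0.1, 1e-3)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-5
```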

In what follows, we write for any vector . Moreover, for convenience, we use the following notation. For any , and any , we write and define the functions , by Furthermore, we drop the subscript for for simplicity, and thus, Direct calculation yields Note that is actually independent of , so that hereafter we will write . We also easily get, for , where , and , are the spectral values and the associated spectral vectors of , respectively, with if , and otherwise, being any vector in satisfying .

Now we review some propositions needed to establish convergence properties of the smoothing Newton method. The following proposition gives explicit expression of the transposed Jacobian matrix with .

Proposition 9 (see [5]). For any , is continuously differentiable on , and its transposed Jacobian matrix is given by where denotes the block-diagonal matrix with block elements , and with

In order to obtain the Newton step, the nonsingularity of is important. The next proposition establishes the nonsingularity of .

Proposition 10 (see [8]). Let be an arbitrary nonzero number and let be an arbitrary triple such that the Jacobian matrix has the Cartesian mixed -property at , that is, satisfies and Then, the matrix given by (40) is nonsingular.

The local Lipschitz continuity and the (strong) semismoothness of play a significant role in establishing locally rapid convergence.

Proposition 11. The function is locally Lipschitzian on and, moreover, is semismooth on . In addition, if is locally Lipschitzian, then is strongly semismooth on .

Proof. It follows from [23, Corollary 3.3] that is globally Lipschitzian and strongly semismooth. Since is a continuously differentiable function, is locally Lipschitzian on . Also, the (strong) semismoothness of can be easily shown from the strong semismoothness of and the (local Lipschitz) continuity of .

We define the function by . It is easily seen that if and only if . Now we partition as , where

In order to achieve locally rapid convergence of the method, we need to control the parameter so that the distance between and is sufficiently small. The following proposition is helpful for controlling the parameter appropriately.

Proposition 12 (see [22]). Let be any point in . Let be any function such that . Let be given. Let be defined by where Then, for any such that , where denotes .

4. Smoothing Newton Method and Its Convergence Properties

In this section, we first propose an algorithm of the smoothing Newton method based on the Fischer-Burmeister function and its smoothing function. We then prove its global and -superlinear (-quadratic) convergence.

4.1. Algorithm

We provide the smoothing Newton algorithm based on the Fischer-Burmeister function. In what follows, we write and for simplicity.

Algorithm 13.   
   Step 0. Choose , , , , , , and .
Choose and . Let . Set .
Step 1. If a stopping criterion, such as , is satisfied, then stop.
Step 2.
Step 2.0. Set and .
Step 2.1. Compute by solving
Step 2.2. If , then let and go to Step 3. Otherwise, go to Step 2.3.
Step 2.3. Let be the smallest nonnegative integer satisfying
Let and .
Step 2.4. If then let and go to Step 3. Otherwise, set and go back to Step 2.1.
Step 3. Update the parameters as follows:
Step 4. Set . Go back to Step 1.
Note that the proposed algorithm consists of outer iteration steps and inner iteration steps. Step 2 comprises the inner iteration steps with the variable and the counter , while the outer iteration steps involve the variable and the counter .
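To convey the outer/inner pattern only, here is a drastically simplified one-dimensional sketch, not Algorithm 13 itself: the line search, the adaptive parameter updates, and the SOC structure are replaced by the simplest illustrative choices. The toy mapping F(x) = x - 1, the fixed shrink factor 0.1, and the inner tolerance 0.5μ are all our assumptions, chosen only to make the two nested loops visible:

```python
import numpy as np

def F(x):
    # A toy one-dimensional mapping; the NCP solution is x* = 1.
    return x - 1.0

def phi(a, b, mu):
    # Scalar smoothed Fischer-Burmeister function.
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def dphi(a, b, mu):
    # Partial derivatives of phi with respect to a and b.
    r = np.sqrt(a * a + b * b + 2.0 * mu * mu)
    return 1.0 - a / r, 1.0 - b / r

x, mu = 5.0, 1.0
for outer in range(20):                 # outer loop: shrink the smoothing parameter
    for inner in range(20):             # inner loop: Newton on H(x) = phi(x, F(x), mu)
        h = phi(x, F(x), mu)
        if abs(h) <= 0.5 * mu:          # inner stopping rule tied to mu (simplified)
            break
        da, db = dphi(x, F(x), mu)
        x -= h / (da + db * 1.0)        # chain rule: H'(x) = da + db * F'(x), F' = 1
    mu *= 0.1                           # simplified update; the paper's rule is adaptive
    if mu < 1e-12:
        break

assert abs(x - 1.0) < 1e-6              # converged to the NCP solution
assert x >= 0.0 and F(x) >= -1e-6       # feasibility up to tolerance
```

The point of the sketch is the interplay the text describes: the inner Newton loop solves the smoothed system only to an accuracy proportional to the current μ, and the outer loop then tightens μ, so neither loop wastes effort far from the solution.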
From Step 3 of Algorithm 13 and (48), the following inequality holds: Letting be a solution of SOCCP (1), we have . Therefore, from (53) and the local Lipschitz continuity of , the following holds:
In the rest of this section, we consider convergence properties of Algorithm 13. In Section 4.2, we prove the global convergence of the algorithm, and in Section 4.3, we investigate local behavior of the algorithm. For this purpose, we make the following assumptions.

Assumption 1. (A1) The solution set of SOCCP (1) is nonempty and bounded. (A2) The function is level-bounded, that is, for any , the level set is bounded. (A3) For any and , is nonsingular.

From Proposition 10, Assumption (A3) holds if has the Cartesian mixed -property for any . The following remarks correspond to SOCCPs (3) and (5).

Remark 14. The case of SOCCP (3). If has the Cartesian -property, then with in (4) has the Cartesian mixed -property and vice versa. Note that has the Cartesian -property at any if has the Cartesian -property (see [10, 13] for the definition of the Cartesian -property for a matrix).

Remark 15. The case of SOCCP (5). If is nonsingular and has the Cartesian -property, then with in (6) has the Cartesian mixed -property.

Note that Assumption (A2) is equivalent to the coerciveness in the sense that

We now give some sufficient conditions for Assumption (A2) in the case of SOCCP (3) or (5).

Lemma 16. Consider SOCCP (5). Let be a function such that for any . Then is level-bounded if and only if is level-bounded.

Proof. Below we use condition (55), which is equivalent to the level-boundedness. We first assume that is level-bounded and claim that is level-bounded. Suppose the contrary. Then, we can find a sequence such that the sequence is bounded and . If is bounded, then we must have . Thus, from the inequality and from the continuity of and , we have . Since this is not possible, is unbounded. By taking a subsequence if necessary, we may assume that as . Since is globally Lipschitzian by [23, Corollary 3.3], we have from (33) that, for any , where is a Lipschitz constant. Then, it follows from (57) and the level-boundedness of that , contradicting the boundedness of . This proves the level-boundedness of .
We next assume that is level-bounded. Let be an arbitrary sequence such that and let and . Then we have Thus, from and the level-boundedness of , we have . Therefore, is level-bounded.

Remark 17. Consider SOCCP (3). Let be a function such that for any . Then, in the same way as in Lemma 16, we can show that is level-bounded if and only if is level-bounded (we have only to consider ).

We now provide some sufficient conditions under which Assumption (A2) holds.

Proposition 18. Consider SOCCP (5). Assume that and have linear growth. Assume further that and satisfy one of the following statements: (a) and have the joint uniform Jordan -property;(b) and have the joint Cartesian weak coerciveness. Then is level-bounded.

Proof. It follows from [19] that is level-bounded in each case. Thus, from Lemma 16 and (29), we obtain the desired results.

The following condition was given by Pan and Chen [13] to establish the level-boundedness property of the merit function defined in Remark 17.

Condition A. Consider SOCCP (3). For any sequence satisfying with , if there exists an index such that and are bounded below, and , then

Under Condition A, we have the following proposition, which corresponds to Proposition 5.2 of [13].

Proposition 19. Consider SOCCP (3). Assume that has the uniform Cartesian -property and satisfies Condition A. Then is level-bounded.

4.2. Global Convergence

In this section, we show the global convergence of Algorithm 13. We first give the well-definedness of the algorithm.

Lemma 20. Suppose that Assumption (A3) holds. Let be any fixed positive number. Every stationary point of satisfies .

Proof. For each stationary point of , holds. Since, from and Assumption (A3), is nonsingular, we have , and thus, .

It follows from (35) that for any , and hence we have, from Assumption (A2) and (55), that is level-bounded for any fixed . Therefore, there exists at least one stationary point of . Thus from Lemma 20, the system has at least one solution, and hence, there exists a point satisfying in Step 2 at each iteration.

We are now ready to show the well-definedness of the algorithm.

Proposition 21. Suppose that Assumptions (A2) and (A3) hold. Then Algorithm 13 is well-defined.

Proof. To establish the well-definedness of Algorithm 13, we only need to prove the well-definedness and the finite termination property of Step 2 at each iteration. Now we fix and . Since is nonsingular for any by and Assumption (A3), is uniquely determined for any . In addition, we have If , then Step 2 terminates in Step 2.2. If , then integer satisfying (50) can be found at Step 2.3, because and . Thus, Step 2 is well-defined at each iteration.
Next we prove the finite termination property of Step 2. To prove by contradiction, we assume that Step 2 never stops and then holds for all . We consider two cases. (i) The case where there exists a subsequence such that . From the boundedness of the level set of at and the line search rule (50), is bounded. In addition, from the continuous differentiability of and Assumption (A3), is also bounded. Thus, there exists a subsequence such that Now holds for all sufficiently large , and hence, we have Passing to the limit with on the above inequality and taking (62) into account, we have On the other hand, it follows from (61) that , which contradicts (65). (ii) The case where there exists such that for all . It follows from (50) that which implies holds for sufficiently large . This contradicts (62). Therefore, the proof is complete.

In order to show the global convergence of the proposed method, we recall the mountain pass theorem (see [24], e.g.), which is as follows.

Lemma 22. Let be a continuously differentiable and level-bounded function. Let be a nonempty and compact set and let be the minimum value of on bd , that is, Assume that there exist vectors and such that and . Then, there exists a vector such that and .

By using the mountain pass theorem, we can show the following global convergence property.

Theorem 23. Suppose that Assumptions (A1)–(A3) hold. Then, the sequence generated by Algorithm 13 is bounded; hence, at least one accumulation point exists, and any such accumulation point is a solution of SOCCP (1).

Proof. From the choices of and in Step 3 of Algorithm 13, and converge to zero. Since , we have . Thus, it follows from and (35) that . It implies from the continuity of that any accumulation point of the sequence is a solution of , and hence, it suffices to prove the boundedness of . To the contrary, we assume is unbounded. Then there exist an index set and a subsequence such that . Since, by Assumption (A1), the solution set is bounded, there exists a compact neighborhood of such that . From the boundedness of , for all sufficiently large . In addition, from , we have for otherwise, there would exist with , that is, , which contradicts . Since is small enough for all sufficiently large , it follows from (35) that holds for any . Now we take . Then (69) yields Letting we have from (68) and that . Therefore, it follows from (69) and (70) that, for all sufficiently large , On the other hand, since and , we get for all sufficiently large.
Now we choose sufficiently large satisfying all the above arguments with and apply Lemma 22 with Then there exists satisfying which contradicts Lemma 20, and therefore the proof is complete.

4.3. Local -Superlinear and -Quadratic Convergence

In Section 4.2, we have shown that the sequence is bounded, and any accumulation point of is a solution of SOCCP (1). In this section, we prove that is superlinearly convergent, or more strongly, quadratically convergent if is locally Lipschitzian. In order to establish the superlinear (quadratic) convergence of the algorithm, we need an assumption that every accumulation point of is nonsingular. We first consider a sufficient condition for this assumption to hold.

Let and be the sequences generated by Algorithm 13, and let be any accumulation point of . Then, by Theorem 23, is a solution of SOCCP (1). We call the following condition nondegeneracy of a solution of the SOCCP (see also [13, 25]).

Definition 24. Let be a solution of SOCCP (1) with , . Then we say that is nondegenerate if , or equivalently, for all .

For a nondegenerate solution, we have the next lemma.

Lemma 25. Let be a nondegenerate solution of SOCCP (1), and put , for . Let be a nonzero number. Then, for each , the following holds: where with Here and are defined by (37) with and . We also write , and set if , and otherwise, set to any vector in satisfying .

Proof. Since is a solution of SOCCP (1), it follows from [5, Proposition 4.2] that for all . Hence, from the nondegeneracy of , we have By Property 1(c), this implies that is nonsingular. In order to prove this lemma, it suffices to show Since (36) yields , (80) follows from Property 1(c).

The following proposition gives a sufficient condition for the nonsingularity of accumulation points of .

Proposition 26. Suppose that Assumptions (A1)–(A3) hold. Let be a sequence generated by Algorithm 13 and let be any accumulation point of it. Moreover, assume that is nondegenerate and the Jacobian matrix has the Cartesian mixed Jordan -property, that is, rank and Then, every accumulation point of is nonsingular.

Proof. By Theorem 23, the sequence is bounded and has at least one accumulation point . Hence, we may assume that converges to without loss of generality. It follows from Lemma 25 and that any accumulation point of , say , is given in the following form: In order to prove that is nonsingular, suppose that , where . We will show that . It follows from (82) that where , . Multiplying both sides of the above equations by from the left-hand side, we get for all , where the second equality uses the fact (see (79)). Suppose on the contrary that . Then from (83) and the assumption (81), we have that for some . Since , , by Property 1(b), we have and . By using Property 1(a), (85) can be rewritten equivalently as Multiplying both sides of the first equation in (87) by from the left, we have Since is positive semidefinite, we have . Similarly, multiplying both sides of the second equation in (87) by from the left, we have . Adding these two inequalities yields On the other hand, by the nondegeneracy of , we have . This together with (86) yields which contradicts (89), and hence, we must have . Then, since the matrix has full column rank, we also have from (83) that . Therefore, is nonsingular.

Next, we show the local convergence properties of the sequence generated by Algorithm 13. The following lemma plays a key role in proving such properties.

Lemma 27. Suppose that Assumptions (A1)–(A3) hold. Let be a sequence generated by Algorithm 13 and let be any accumulation point of it. In addition, assume that every accumulation point of is nonsingular. Then, for sufficiently close to , holds, where is defined by . Moreover, if is locally Lipschitzian and , then holds.

Proof. From Theorem 23, is bounded, and hence, there exists at least one accumulation point, and any such point satisfies . Since every accumulation point of is nonsingular, we have, from Assumption (A3), that there exists a constant such that for all . Let such that Note that exists for all , because is compact [17, page 70]. We have from (93) and that It follows from (54) that, for sufficiently close to , From (35), the inequality by Step 3 of Algorithm 13, and the local Lipschitz continuity of , we get Since, by Proposition 11, is semismooth, we have from (15) and (17) that Therefore, from (95)–(97) and , we obtain (91). Moreover, if is locally Lipschitzian, then, by Proposition 11, is strongly semismooth, and hence, we have Therefore, from (95)–(97) and , we obtain (92).

Using Lemma 27, we obtain the local -superlinear (-quadratic) convergence result.

Theorem 28. Suppose that Assumptions (A1)–(A3) hold. Let be a sequence generated by Algorithm 13. If every accumulation point of is nonsingular, then the following statements hold. (a)For all sufficiently large, is satisfied at in Step 2.2 of Algorithm 13. Moreover, for all sufficiently large, holds, where .(b)The whole sequence converges -superlinearly to a solution of SOCCP (1). Moreover, if is locally Lipschitz continuous and , then the sequence converges -quadratically.

Proof. Since (b) is directly obtained from (a) and Lemma 27, it suffices to prove (a). Namely, we prove that for all sufficiently large. We have from (93) that and hence, it follows from the boundedness of and the continuity of that is bounded. Let be any accumulation point of , and let be sufficiently close to . By Theorem 23, is a solution of SOCCP (1), and thus, . From the local Lipschitz continuity of , we may assume that is small enough. By Lemma 27 and (100), there exists a constant such that Therefore, we have from (35) with that This together with Lemma 27 and the local Lipschitz continuity of yields that Therefore, it follows again from (35) with that From (35), the choices of and in Step 3 of Algorithm 13, and , we get where , and hence, (104) yields . This implies that . Taking into account and , we have and . Since, by Lemma 27, remains in the neighborhood of , the desired results are obtained.

5. Numerical Experiments

In this section, we show some numerical results for Algorithm 13. The program was coded in MATLAB 7, and computations were carried out on a machine with an Intel Core i7-3770K CPU (3.50 GHz × 2) and 8.0 GB RAM. We set the parameters , , , , , , and . We also set the function as follows (see [22] for details): for , and for . For all problems, we randomly chose the initial point whose components were distributed on the interval , by using the rand command of MATLAB. The stopping criterion in Step 1 is relaxed to

We first solve the following second-order cone programming (SOCP) problem: which is equivalently reformulated as SOCCP (1) with . We randomly generate one hundred test problems such that there exist primal and dual strictly feasible solutions. Specifically, we first randomly choose a matrix whose components are distributed on , and vectors and , and then set and . Here, each component of is distributed on and is set to , where is chosen randomly, and is also generated similarly, while each component of is distributed on . In order to compare our method with another method, we solve SOCP (109) by SDPT3 [26, 27], which is a software package implementing interior-point methods for semidefinite, second-order cone, and linear programming problems. We use SDPT3 with its default parameter and option settings. The obtained results are shown in Table 1, in which “Iter” and “CPU” denote the average values of the number of iterations and the CPU time in seconds, respectively. In particular, the value of “Iter” for “our method” denotes the number of times that the Newton equations (49) have been solved. In the column of , denotes , for example. Since SDPT3 failed to solve some test problems when , the average values in this case were taken over the successful trials only. We see from Table 1 that our method is superior to, or at least comparable with, SDPT3 in terms of the number of iterations. In terms of CPU time, our method is also superior to SDPT3 for small-scale problems; however, SDPT3 outperforms our method for medium- or large-scale problems. We believe that this is because SDPT3 is coded to reduce computational costs by means of fundamental techniques of matrix computation and so forth. Further development of our method would require more careful tuning of our code, but such tuning is beyond the scope of this paper.

In order to confirm the local behavior of the sequence generated by Algorithm 13, we list the value of at each outer iteration in Table 2. In addition, to investigate how the parameter affects the rate of convergence, we ran the algorithm with . We also investigate the relation between the choices of and the behavior of , and list the behavior of . We chose one of the above test problems in the case . We note that means , for example. We see from Table 2 that the sequence generated by Algorithm 13 seems to converge -quadratically and that the parameter does not affect the convergence of the sequence. On the other hand, we find that the choices of affect the behavior of .

The next experiment is an application of Algorithm 13 to the robust Nash equilibrium problem in game theory. The robust Nash equilibrium [2, 3, 28, 29] is a relatively new solution concept for noncooperative games with uncertain information. In this model, it is assumed that each player’s cost (pay-off) function and/or the opponents’ strategies are uncertain but belong to certain uncertainty sets, and each player chooses his strategy by taking the worst possible case into consideration. In other words, each player makes decisions according to the robust optimization policy. In this experiment, we focus on the following 2-person robust Nash game with quadratic cost functions: where for are given matrices, is the vector of ones of appropriate dimension, and and denote the mixed strategies for Players 1 and 2, respectively. Moreover, and represent the estimation error or noise, and each player knows only that they belong to the uncertainty sets and , respectively. Under this situation, the tuple is called a robust Nash equilibrium when and solve (111) and (112) simultaneously. In this experiment, we set and vary the values of and . Since and are defined by means of the Euclidean norm, the robust Nash equilibrium problem can be equivalently reformulated as an SOCCP (the reformulated SOCCP is explicitly written in Section 5.1.1 of [3]; we thus omit the details here). Here, we emphasize that the reformulated SOCCP cannot be expressed as any SOCP, and hence existing software such as SDPT3 cannot be applied. Moreover, if the reformulated SOCCP is rewritten in the form (1), then it satisfies neither (4) nor (6). The obtained results are summarized in Table 3, in which and denote the obtained robust Nash equilibria for various choices of the uncertainty radii and . For all problems, we could calculate the robust Nash equilibria correctly.
Moreover, as is discussed in the existing papers, we can observe that the robust Nash equilibria move smoothly as the values of and change gradually.

6. Conclusion

In this paper, we have proposed a smoothing Newton method with appropriate parameter control based on the Fischer-Burmeister function for solving the SOCCP. We have shown its global and -quadratic convergence properties under some assumptions. In addition, we have considered some sufficient conditions for those assumptions. In numerical experiments, we have confirmed the effectiveness of the proposed method.

Acknowledgments

The authors would like to thank the academic editor Prof. Jein-Shan Chen and the anonymous reviewers who helped them improve the original paper. The first author is supported in part by the Grant-in-Aid for Scientific Research (C) 25330030 and Young Scientists (B) 25870239 of Japan Society for the Promotion of Science. The third author is supported in part by the Grant-in-Aid for Young Scientists (B) 22760064 of Japan Society for the Promotion of Science.