Abstract

The generalized Nash equilibrium problem (GNEP) is an extension of the standard Nash equilibrium problem (NEP), in which each player's strategy set may depend on the rival players' strategies. In this paper, we present two descent-type methods. The algorithms are based on a reformulation of the generalized Nash equilibrium problem, via the Nikaido-Isoda function, as an unconstrained optimization problem. We prove that our algorithms are globally convergent, and the convergence analysis is not based on conditions guaranteeing that every stationary point of the optimization problem is a solution of the GNEP.

1. Introduction

The generalized Nash equilibrium problem (GNEP for short) is an extension of the standard Nash equilibrium problem (NEP for short), in which the strategy set of each player depends on the strategies of all the other players as well as on his own strategy. The GNEP has recently attracted much attention due to its applications in various fields such as mathematics, computer science, economics, and engineering [1–11]. For more details, we refer the reader to a recent survey paper by Facchinei and Kanzow [3] and the references therein.

Let us first recall the definition of the GNEP. There are $N$ players labelled by an integer $\nu = 1, \dots, N$. Each player $\nu$ controls the variables $x^\nu \in \mathbb{R}^{n_\nu}$. Let $x = (x^1, \dots, x^N) \in \mathbb{R}^n$ be the vector formed by all these decision variables, where $n = n_1 + \dots + n_N$. To emphasize the $\nu$th player's variables within the vector $x$, we sometimes write $x = (x^\nu, x^{-\nu})$, where $x^{-\nu}$ denotes all the other players' variables. In the game, each player $\nu$ controls the variables $x^\nu$ and tries to minimize a cost function $\theta_\nu(x^\nu, x^{-\nu})$ subject to the constraint $(x^\nu, x^{-\nu}) \in X$ with $x^{-\nu}$ given as exogenous, where $X \subseteq \mathbb{R}^n$ is a common strategy set. A vector $x^* \in X$ is called a solution of the GNEP, or a generalized Nash equilibrium, if for each player $\nu$, $x^{*\nu}$ solves the following optimization problem with $x^{*-\nu}$ being fixed:
$$\min_{x^\nu} \ \theta_\nu(x^\nu, x^{*-\nu}) \quad \text{subject to} \quad (x^\nu, x^{*-\nu}) \in X. \qquad (1)$$

If $X$ is defined as the Cartesian product of certain sets $X_\nu \subseteq \mathbb{R}^{n_\nu}$, that is, $X = X_1 \times \dots \times X_N$, then the GNEP reduces to the standard Nash equilibrium problem.
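For illustration, consider the following classical example, taken from the survey [3] (it is not part of the analysis below): two players each control one scalar variable and solve
$$\min_{x^1}\ (x^1 - 1)^2 \ \ \text{s.t.}\ \ x^1 + x^2 \le 1, \qquad \min_{x^2}\ \left(x^2 - \tfrac{1}{2}\right)^2 \ \ \text{s.t.}\ \ x^1 + x^2 \le 1,$$
so that $X = \{x \in \mathbb{R}^2 : x^1 + x^2 \le 1\}$. Every point $(t, 1-t)$ with $t \in [1/2, 1]$ is a generalized Nash equilibrium, so solutions are typically nonunique. Among them, $x^* = (3/4, 1/4)$, the point of $X$ closest to $(1, 1/2)$, is the normalized Nash equilibrium in the sense of Definition 1 below; we return to this example in the numerical sketches of the later sections.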

Throughout this paper, we make the following assumption.

Assumption 1. (a) The set $X \subseteq \mathbb{R}^n$ is nonempty, closed, and convex.
(b) Each cost function $\theta_\nu$, $\nu = 1, \dots, N$, is continuously differentiable and, as a function of $x^\nu$ alone, convex.

A basic tool for both the theoretical analysis and the numerical solution of (generalized) Nash equilibrium problems is the Nikaido-Isoda function, defined as
$$\Psi(x, y) := \sum_{\nu=1}^{N} \left[ \theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y^\nu, x^{-\nu}) \right]. \qquad (2)$$

Sometimes the name Ky Fan function is also found in the literature; see [12, 13]. In the following, we state a definition which we have taken from [9].

Definition 1. A vector $x^* \in X$ is a normalized Nash equilibrium of the GNEP if $\max_{y \in X} \Psi(x^*, y) = 0$ holds, where $\Psi$ denotes the Nikaido-Isoda function defined in (2).

In order to overcome the nonsmoothness of the value function $x \mapsto \max_{y \in X} \Psi(x, y)$, von Heusinger and Kanzow [8] used a simple regularization of the Nikaido-Isoda function. For a parameter $\alpha > 0$, the following regularized Nikaido-Isoda function was considered:
$$\Psi_\alpha(x, y) := \sum_{\nu=1}^{N} \left[ \theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y^\nu, x^{-\nu}) \right] - \frac{\alpha}{2}\|x - y\|^2. \qquad (3)$$
Since, under the given Assumption 1, $\Psi_\alpha(x, \cdot)$ is strongly concave in $y$, the maximization problem $\max_{y \in X} \Psi_\alpha(x, y)$ has a unique solution for each $x \in \mathbb{R}^n$, denoted by
$$y_\alpha(x) := \arg\max_{y \in X} \Psi_\alpha(x, y). \qquad (4)$$

The corresponding value function is then defined by
$$V_\alpha(x) := \max_{y \in X} \Psi_\alpha(x, y) = \Psi_\alpha(x, y_\alpha(x)). \qquad (5)$$

Let $\beta > \alpha$ be a given parameter. The corresponding value function is then defined by
$$V_\beta(x) := \max_{y \in X} \Psi_\beta(x, y) = \Psi_\beta(x, y_\beta(x)). \qquad (6)$$

Define
$$V_{\alpha\beta}(x) := V_\alpha(x) - V_\beta(x). \qquad (7)$$

In [8], the following important properties of the function $V_{\alpha\beta}$ have been proved.

Theorem 2. The following statements hold: (a) $V_{\alpha\beta}(x) \ge 0$ for any $x \in \mathbb{R}^n$; (b) $x^*$ is a normalized Nash equilibrium of the GNEP if and only if $V_{\alpha\beta}(x^*) = 0$; (c) $V_{\alpha\beta}$ is continuously differentiable on $\mathbb{R}^n$, and
$$\nabla V_{\alpha\beta}(x) = \nabla_x \Psi_\alpha(x, y_\alpha(x)) - \nabla_x \Psi_\beta(x, y_\beta(x)). \qquad (8)$$

From Theorem 2, we know that the normalized Nash equilibria of the GNEP are precisely the global minima of the smooth unconstrained optimization problem (see [5])
$$\min_{x \in \mathbb{R}^n} V_{\alpha\beta}(x) \qquad (9)$$
with zero optimal value.
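To make the reformulation concrete, the following minimal numerical sketch (ours, not part of the original analysis) evaluates $V_{\alpha\beta}$ on the classical two-player example stated after (1); the choice of SciPy's SLSQP solver for the inner maximization and the test points are illustrative assumptions.

```python
# Minimal numerical sketch (ours) of the reformulation (9) for the classical
# two-player example stated after (1). SciPy's SLSQP solver, the starting
# point, and the test points are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

# Separable costs theta_1(x) = (x1-1)^2, theta_2(x) = (x2-1/2)^2, so the
# Nikaido-Isoda function (2) simplifies to Psi(x, y) = c(x) - c(y).
def c(z):
    return (z[0] - 1.0) ** 2 + (z[1] - 0.5) ** 2

def psi(x, y):
    return c(x) - c(y)

# Common strategy set X = {y in R^2 : y1 + y2 <= 1}.
X = [{"type": "ineq", "fun": lambda y: 1.0 - y[0] - y[1]}]

def y_reg(x, a):
    # y_a(x): the unique maximizer of the regularized NI function (3) over X.
    neg = lambda y: -(psi(x, y) - 0.5 * a * np.dot(x - y, x - y))
    return minimize(neg, x0=np.zeros(2), constraints=X, method="SLSQP").x

def V(x, a):
    # Value function (5): V_a(x) = Psi_a(x, y_a(x)).
    y = y_reg(x, a)
    return psi(x, y) - 0.5 * a * np.dot(x - y, x - y)

alpha, beta = 0.5, 2.0
for x in [np.array([0.0, 0.0]), np.array([0.75, 0.25])]:
    v_ab = V(x, alpha) - V(x, beta)  # V_{alpha,beta}(x), cf. (7)
    print(x, round(float(v_ab), 6))
# Output is consistent with Theorem 2: V_{alpha,beta} is nonnegative and
# vanishes (up to solver tolerance) at the normalized equilibrium (3/4, 1/4).
```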

In this paper, we develop two new descent methods for finding a normalized Nash equilibrium of the GNEP by solving the optimization problem (9). The key to our methods is a strategy for adjusting the parameters $\alpha$ and $\beta$ when a stationary point of $V_{\alpha\beta}$ is not a solution of the GNEP. We will show that our algorithms are globally convergent to a normalized Nash equilibrium under an appropriate assumption on the cost functions, which is not stronger than the one considered in [8].

The organization of the paper is as follows. In Section 2, we state the main assumption underlying our algorithms and present some examples of GNEPs satisfying it. In Section 3, we derive some useful properties of the function $V_{\alpha\beta}$. In Section 4, we formally state our algorithms and prove that they are both globally convergent to a normalized Nash equilibrium.

2. Main Assumption

In order to construct our algorithms and guarantee their convergence, we make the following assumption.

Assumption 2. For any $x \in \mathbb{R}^n$ and $0 < \alpha < \beta$, if $y_\alpha(x) \ne y_\beta(x)$, we have
$$\langle \nabla_x \Psi(x, y_\alpha(x)) - \nabla_x \Psi(x, y_\beta(x)),\ y_\alpha(x) - y_\beta(x) \rangle \le 0. \qquad (10)$$
We next consider three examples which satisfy Assumption 2.

Example 3. Let us consider the case in which all the cost functions are separable, that is,
$$\theta_\nu(x) = f_\nu(x^\nu) + h_\nu(x^{-\nu}), \quad \nu = 1, \dots, N, \qquad (11)$$
where $f_\nu : \mathbb{R}^{n_\nu} \to \mathbb{R}$ is convex and continuously differentiable and $h_\nu : \mathbb{R}^{n - n_\nu} \to \mathbb{R}$ is continuously differentiable. In this case, (2) reduces to $\Psi(x, y) = \sum_{\nu=1}^{N} [f_\nu(x^\nu) - f_\nu(y^\nu)]$, so $\nabla_x \Psi(x, y)$ does not depend on $y$. A simple calculation shows that, for any $x \in \mathbb{R}^n$ and $0 < \alpha < \beta$, we have
$$\nabla_x \Psi(x, y_\alpha(x)) - \nabla_x \Psi(x, y_\beta(x)) = 0. \qquad (12)$$
Hence Assumption 2 holds.

Example 4. Consider the case where the cost functions are quadratic, that is,
$$\theta_\nu(x) = \frac{1}{2}\, x^T A^\nu x + (b^\nu)^T x, \quad \nu = 1, \dots, N, \qquad (13)$$
with symmetric matrices $A^\nu \in \mathbb{R}^{n \times n}$ and vectors $b^\nu \in \mathbb{R}^n$ such that the diagonal blocks $A^\nu_{\nu\nu}$ are positive semidefinite, so that Assumption 1(b) holds. For any $x \in \mathbb{R}^n$ and $0 < \alpha < \beta$, we have
$$\langle \nabla_x \Psi(x, y_\alpha(x)) - \nabla_x \Psi(x, y_\beta(x)),\ y_\alpha(x) - y_\beta(x) \rangle = -\,(y_\alpha(x) - y_\beta(x))^T B\, (y_\alpha(x) - y_\beta(x)), \qquad (14)$$
where $B \in \mathbb{R}^{n \times n}$ is the block matrix defined by
$$B_{\mu\nu} = A^\nu_{\mu\nu} \ \ (\mu \ne \nu), \qquad B_{\nu\nu} = 0. \qquad (15)$$
Therefore, if the matrix $B$ is positive semidefinite, Assumption 2 is satisfied.

We next clarify the relationship between our assumption and the one considered in [8]. There, the convergence analysis relies on the following stronger condition: for any $x \in \mathbb{R}^n$ and $0 < \alpha < \beta$ with $y_\alpha(x) \ne y_\beta(x)$, the strict inequality
$$\langle \nabla_x \Psi(x, y_\alpha(x)) - \nabla_x \Psi(x, y_\beta(x)),\ y_\alpha(x) - y_\beta(x) \rangle < 0 \qquad (16)$$
holds. The following example shows that Assumption 2 is indeed weaker.

Example 5. Consider the GNEP with $N = 2$, $X = \{x \in \mathbb{R}^2 : x \ge 0,\ x_1 + x_2 \le 1\}$, and the separable cost functions
$$\theta_1(x) = x_1^2, \qquad \theta_2(x) = x_2^2. \qquad (17)$$
The point $x^* = (0, 0)$ is the unique normalized Nash equilibrium. Since the costs are separable, Example 3 yields, for any $x \in \mathbb{R}^2$ and $0 < \alpha < \beta$,
$$\langle \nabla_x \Psi(x, y_\alpha(x)) - \nabla_x \Psi(x, y_\beta(x)),\ y_\alpha(x) - y_\beta(x) \rangle = 0. \qquad (18)$$
Therefore Assumption 2 holds, but (16) does not hold for any $x$ with $y_\alpha(x) \ne y_\beta(x)$ (such points exist; take, e.g., $x = (2, 0)$).

3. Properties of $V_{\alpha\beta}$

Lemma 6. For any $x \in \mathbb{R}^n$ and $0 < \alpha < \beta$, we have
$$V_\alpha(x) - \Psi_\alpha(x, y_\beta(x)) \ge \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \qquad (19)$$
$$V_\beta(x) - \Psi_\beta(x, y_\alpha(x)) \ge \frac{\beta}{2}\|y_\alpha(x) - y_\beta(x)\|^2. \qquad (20)$$

Proof. Since $y_\alpha(x)$ satisfies the optimality condition for the problem $\max_{y \in X} \Psi_\alpha(x, y)$, then
$$\langle \nabla_y \Psi_\alpha(x, y_\alpha(x)),\ y - y_\alpha(x) \rangle \le 0 \quad \text{for all } y \in X. \qquad (21)$$
In a similar way, it follows that $y_\beta(x)$ satisfies
$$\langle \nabla_y \Psi_\beta(x, y_\beta(x)),\ y - y_\beta(x) \rangle \le 0 \quad \text{for all } y \in X. \qquad (22)$$
Since each $\theta_\nu$, as a function of $x^\nu$ alone, is convex, the functions $\Psi_\alpha(x, \cdot)$ and $\Psi_\beta(x, \cdot)$ are strongly concave with moduli $\alpha$ and $\beta$, so that
$$\Psi_\alpha(x, y_\beta(x)) \le \Psi_\alpha(x, y_\alpha(x)) + \langle \nabla_y \Psi_\alpha(x, y_\alpha(x)),\ y_\beta(x) - y_\alpha(x) \rangle - \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \qquad (23)$$
$$\Psi_\beta(x, y_\alpha(x)) \le \Psi_\beta(x, y_\beta(x)) + \langle \nabla_y \Psi_\beta(x, y_\beta(x)),\ y_\alpha(x) - y_\beta(x) \rangle - \frac{\beta}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \qquad (24)$$
respectively. Thus, using the definition (5) of $V_\alpha$, inequality (21) with $y = y_\beta(x)$, and (23), we obtain (19). Similarly, using the definition (6) of $V_\beta$, inequality (22) with $y = y_\alpha(x)$, and (24), we obtain (20).
The proof is complete.

Lemma 7. Assume that $X$ is bounded. For any $0 < \alpha < \beta$, we have
$$\lim_{\|x\| \to \infty} V_{\alpha\beta}(x) = +\infty. \qquad (25)$$

Proof. We have from (19) that
$$V_\alpha(x) \ge \Psi_\alpha(x, y_\beta(x)). \qquad (26)$$
By the definition (3) of the regularized Nikaido-Isoda function, we have
$$\Psi_\alpha(x, y_\beta(x)) = \Psi_\beta(x, y_\beta(x)) + \frac{\beta - \alpha}{2}\|x - y_\beta(x)\|^2 = V_\beta(x) + \frac{\beta - \alpha}{2}\|x - y_\beta(x)\|^2, \qquad (27)$$
and hence
$$V_{\alpha\beta}(x) \ge \frac{\beta - \alpha}{2}\|x - y_\beta(x)\|^2. \qquad (28)$$
Since $y_\beta(x) \in X$ and $X$ is bounded, we get that $\|x - y_\beta(x)\| \to \infty$ as $\|x\| \to \infty$, which together with (28) proves (25). This completes the proof.

Let $d(x) := y_\alpha(x) - y_\beta(x)$. Equation (8) and Assumption 2 yield
$$\langle \nabla V_{\alpha\beta}(x),\ d(x) \rangle = \langle \nabla_x \Psi(x, y_\alpha(x)) - \nabla_x \Psi(x, y_\beta(x)),\ d(x) \rangle - \gamma_{\alpha\beta}(x) \le -\gamma_{\alpha\beta}(x), \qquad (29)$$
where we used the identity $\nabla_x \Psi_\alpha(x, y) = \nabla_x \Psi(x, y) + \alpha(y - x)$ and a direct computation, with
$$\gamma_{\alpha\beta}(x) := \frac{\beta - \alpha}{2}\left( \|x - y_\alpha(x)\|^2 - \|x - y_\beta(x)\|^2 \right) - \frac{\alpha + \beta}{2}\|d(x)\|^2; \qquad (30)$$
the nonnegativity
$$\gamma_{\alpha\beta}(x) \ge 0 \qquad (31)$$
follows by adding the inequalities (19) and (20), which in turn rest on (23) and (24). In particular, either $\|y_\alpha(x) - y_\beta(x)\|$ is above a tolerance $\varepsilon > 0$, in which case $d(x)$ is a direction of sufficient descent for $V_{\alpha\beta}$ at $x$, or else $\|y_\alpha(x) - y_\beta(x)\| \le \varepsilon$ and, as we show in the lemma below, $x$ is an approximate solution of the GNEP with accuracy depending on $\alpha$, $\beta$, $\varepsilon$. This result will lead to our methods.

Lemma 8. For any $x \in \mathbb{R}^n$ and $0 < \alpha < \beta$, we have
$$V_{\alpha\beta}(x) \ge \frac{\beta - \alpha}{2}\|x - y_\beta(x)\|^2 + \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \qquad (32)$$
$$\frac{\beta - \alpha}{2}\|x - y_\beta(x)\|^2 \le V_{\alpha\beta}(x) \le \frac{\beta - \alpha}{2}\|x - y_\alpha(x)\|^2, \qquad (33)$$
where $y_\alpha(x)$ and $y_\beta(x)$ are the unique maximizers defined in (4).

Proof. Inequality (32) follows immediately from (19) in Lemma 6, combined with the identity (27).
The definition (5) of $V_\alpha$ implies that
$$V_\alpha(x) \ge \Psi_\alpha(x, y_\beta(x)) = V_\beta(x) + \frac{\beta - \alpha}{2}\|x - y_\beta(x)\|^2, \qquad (34)$$
which proves the first inequality in (33).
Since $V_\beta(x) - \Psi_\beta(x, y_\alpha(x))$ is nonnegative (it is the sum of the nonnegative quantity $(\beta/2)\|y_\alpha(x) - y_\beta(x)\|^2$ with another nonnegative quantity; see (22) and (24)), we have
$$V_\beta(x) \ge \Psi_\beta(x, y_\alpha(x)) = V_\alpha(x) - \frac{\beta - \alpha}{2}\|x - y_\alpha(x)\|^2. \qquad (35)$$
Thus,
$$V_{\alpha\beta}(x) \le \frac{\beta - \alpha}{2}\|x - y_\alpha(x)\|^2, \qquad (36)$$
which is the second inequality in (33). This completes the proof.
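As a sanity check, the following small script (ours) verifies the bounds of Lemma 8 and the descent inequality (29)–(31) numerically on the separable two-player example from the introduction, for which formula (8) reduces to $\nabla V_{\alpha\beta}(x) = \alpha(y_\alpha(x) - x) - \beta(y_\beta(x) - x)$; the sampled points and the solver are illustrative assumptions.

```python
# Numerical check (ours) of the bounds (32)-(33) and the descent inequality
# (29)-(31) on the separable two-player example; sampled points and solver
# are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

c = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 0.5) ** 2
psi = lambda x, y: c(x) - c(y)                        # NI function (2), separable case
X = [{"type": "ineq", "fun": lambda y: 1.0 - y[0] - y[1]}]

def y_reg(x, a):
    neg = lambda y: -(psi(x, y) - 0.5 * a * np.dot(x - y, x - y))
    return minimize(neg, x0=np.zeros(2), constraints=X, method="SLSQP").x

def V(x, a):
    y = y_reg(x, a)
    return psi(x, y) - 0.5 * a * np.dot(x - y, x - y)

a, b = 0.5, 2.0
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.uniform(-2.0, 2.0, size=2)
    ya, yb = y_reg(x, a), y_reg(x, b)
    v = V(x, a) - V(x, b)                             # V_{ab}(x)
    lower = 0.5 * (b - a) * np.dot(x - yb, x - yb)    # first bound in (33)
    upper = 0.5 * (b - a) * np.dot(x - ya, x - ya)    # second bound in (33)
    grad = a * (ya - x) - b * (yb - x)                # (8) for separable costs
    slope = np.dot(grad, ya - yb)                     # <grad V_{ab}(x), d(x)>, cf. (29)
    print(lower <= v + 1e-6 <= upper + 1e-6, slope <= 1e-6)
# Expected output: "True True" on every line.
```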

4. Two Methods for Solving the GNEP

In this section, we introduce two methods for solving the GNEP, motivated by the D-gap function scheme for solving monotone variational inequalities [14, 15]. We first formally describe our methods and then analyze their convergence using Lemma 8.

Algorithm 9. Choose an arbitrary initial point $x^0 \in \mathbb{R}^n$ and any $0 < \alpha_0 < \beta_0$. Choose any sequences of numbers $\{\varepsilon_k\}$, $\{\delta_k\}$ such that
$$\varepsilon_k > 0, \quad \lim_{k \to \infty} \varepsilon_k = 0, \quad \delta_k \ge 0, \quad \sum_{k=1}^{\infty} \delta_k < \infty. \qquad (37)$$
For $k = 1, 2, \dots$, we iterate the following.
Iteration k. Choose any $\alpha_k \in (0, \alpha_{k-1}]$ and $\beta_k \in [\beta_{k-1}, \infty)$, in such a way that $\lim_{k\to\infty} \alpha_k = 0$ and $\lim_{k\to\infty} \beta_k = \infty$, satisfying
$$V_{\alpha_k\beta_k}(x^{k-1}) \le V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1}) + \delta_k. \qquad (38)$$
Apply a descent method to the unconstrained minimization of the function $V_{\alpha_k\beta_k}$, with $x^{k-1}$ as the starting point and using $d_k(x) := y_{\alpha_k}(x) - y_{\beta_k}(x)$ as a safeguard descent direction at $x$, until the method generates an $x$ satisfying $\|y_{\alpha_k}(x) - y_{\beta_k}(x)\| \le \varepsilon_k$. The resulting $x$ is denoted by $x^k$.
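The following runnable sketch (ours, based on the statement of Algorithm 9 as reconstructed above) illustrates the outer loop on the separable two-player example. For simplicity, the inner phase uses plain gradient descent with Armijo backtracking instead of a general safeguarded descent method; this is valid here because, for separable costs, formula (8) reduces to the explicit gradient used in the code. All schedules, constants, and iteration caps are illustrative assumptions.

```python
# Runnable sketch (ours) of the outer loop of Algorithm 9 on the separable
# two-player example. Inner phase: gradient descent with Armijo backtracking;
# for separable costs, (8) gives grad V_{ab}(x) = a*(y_a(x)-x) - b*(y_b(x)-x).
import numpy as np
from scipy.optimize import minimize

c = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 0.5) ** 2
psi = lambda x, y: c(x) - c(y)
X = [{"type": "ineq", "fun": lambda y: 1.0 - y[0] - y[1]}]

def y_reg(x, a):
    neg = lambda y: -(psi(x, y) - 0.5 * a * np.dot(x - y, x - y))
    return minimize(neg, x0=np.zeros(2), constraints=X, method="SLSQP").x

def V(x, t):
    y = y_reg(x, t)
    return psi(x, y) - 0.5 * t * np.dot(x - y, x - y)

def V_ab(x, a, b):
    return V(x, a) - V(x, b)   # D-gap-type function (7)

x = np.array([2.0, 2.0])
a, b, eps = 1.0, 4.0, 0.5
for k in range(6):                                    # outer loop: iteration k
    for _ in range(50):                               # inner descent on V_{ab}
        ya, yb = y_reg(x, a), y_reg(x, b)
        if np.linalg.norm(ya - yb) <= eps:            # stopping rule of iteration k
            break
        g = a * (ya - x) - b * (yb - x)               # grad V_{ab}(x), separable case
        v0, t = V_ab(x, a, b), 1.0
        while V_ab(x - t * g, a, b) > v0 - 1e-4 * t * np.dot(g, g) and t > 1e-8:
            t *= 0.5                                  # Armijo backtracking
        x = x - t * g
    a, b, eps = 0.5 * a, 2.0 * b, 0.5 * eps           # drive a -> 0, b -> inf, eps -> 0
print(x)  # tends towards the normalized equilibrium (0.75, 0.25)
```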

Theorem 10. Assume that $X$ is bounded. Let $\{x^k\}$ be generated by Algorithm 9. Then $\{x^k\}$ is bounded; $\lim_{k\to\infty}\|x^k - y_{\alpha_k}(x^k)\| = \lim_{k\to\infty}\|x^k - y_{\beta_k}(x^k)\| = 0$; and every cluster point of $\{x^k\}$ is a normalized Nash equilibrium of the GNEP.

Proof. Denote $V_k := V_{\alpha_k\beta_k}$. Since the descent method does not increase $V_k$, we have $V_k(x^k) \le V_k(x^{k-1})$ for every $k$, and (38) then yields $V_k(x^k) \le V_{k-1}(x^{k-1}) + \delta_k$; it follows from (37) that the sequence $\{V_k(x^k)\}$ converges ([16], Lemma 3). For each $k$, we have from Lemma 8 that (32) and (33) hold with $\alpha = \alpha_k$, $\beta = \beta_k$, $x = x^k$. This, together with the stopping criterion $\|y_{\alpha_k}(x^k) - y_{\beta_k}(x^k)\| \le \varepsilon_k$ and the triangle inequality $\|x^k - y_{\alpha_k}(x^k)\| \le \|x^k - y_{\beta_k}(x^k)\| + \varepsilon_k$, yields
$$\frac{\beta_k - \alpha_k}{2}\|x^k - y_{\beta_k}(x^k)\|^2 \le V_k(x^k) \le \frac{\beta_k - \alpha_k}{2}\left( \|x^k - y_{\beta_k}(x^k)\| + \varepsilon_k \right)^2. \qquad (39)$$
Since $\{V_k(x^k)\}$ is bounded and $\beta_k - \alpha_k \to \infty$, the first inequality in (39) implies $\|x^k - y_{\beta_k}(x^k)\| \to 0$; since $y_{\beta_k}(x^k) \in X$ and $X$ is bounded, $\{x^k\}$ is bounded. Moreover, together with $\varepsilon_k \to 0$, this also implies $\|x^k - y_{\alpha_k}(x^k)\| \to 0$.
Now let $\bar{x}$ be a cluster point of $\{x^k\}$ and let $\{x^k\}_{k \in K}$ be a subsequence converging to $\bar{x}$. Since $X$ is closed, $y_{\beta_k}(x^k) \in X$, and $\|x^k - y_{\beta_k}(x^k)\| \to 0$, we have $\bar{x} \in X$. By the definition of $y_{\alpha_k}(x^k)$, for each $y \in X$ we have
$$\Psi(x^k, y) - \frac{\alpha_k}{2}\|x^k - y\|^2 \le \Psi(x^k, y_{\alpha_k}(x^k)) - \frac{\alpha_k}{2}\|x^k - y_{\alpha_k}(x^k)\|^2 \le \Psi(x^k, y_{\alpha_k}(x^k)).$$
Taking the limit for $k \in K$ and using $\alpha_k \to 0$, $y_{\alpha_k}(x^k) \to \bar{x}$, and the continuity of $\Psi$, we obtain $\Psi(\bar{x}, y) \le \Psi(\bar{x}, \bar{x}) = 0$ for every $y \in X$. Hence $\max_{y \in X}\Psi(\bar{x}, y) = 0$, and $\bar{x}$ is a normalized Nash equilibrium of the GNEP. This completes the proof.

Algorithm 11. Choose an arbitrary initial point $x^0 \in \mathbb{R}^n$, any $0 < \alpha_0 < \beta_0$, and two sequences of nonnegative numbers $\{\delta_k\}$, $\{\varepsilon_k\}$ such that
$$\sum_{k=1}^{\infty} \delta_k < \infty, \quad \lim_{k\to\infty} \varepsilon_k = 0. \qquad (40)$$
Choose any continuous function $\rho : [0, \infty) \to [0, \infty)$ with $\rho(0) = 0$ and $\rho(t) > 0$ for all $t > 0$. For $k = 1, 2, \dots$, we iterate the following.
Iteration k. Choose any $\alpha_k \in (0, \alpha_{k-1}]$ and $\beta_k \in [\beta_{k-1}, \infty)$, in such a way that $\lim_{k\to\infty}\alpha_k = 0$ and $\lim_{k\to\infty}\beta_k = \infty$, satisfying
$$V_{\alpha_k\beta_k}(x^{k-1}) \le V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1}) + \delta_k. \qquad (41)$$
Apply a descent method to the unconstrained minimization of the function $V_{\alpha_k\beta_k}$, with $x^{k-1}$ as the starting point. We assume the descent method has the property that the amount of descent achieved per step at $x$ is bounded away from zero whenever $x$ is bounded and $\|y_{\alpha_k}(x) - y_{\beta_k}(x)\|$ is bounded away from zero. Then, either the method in a finite number of steps generates an $x$ satisfying
$$\|y_{\alpha_k}(x) - y_{\beta_k}(x)\| \le \min\{\varepsilon_k,\ \rho(\|x - y_{\alpha_k}(x)\|)\}, \qquad (42)$$
which we denote by $x^k$, or else $V_{\alpha_k\beta_k}$ must decrease towards zero along the iterates it generates, in which case any cluster point of these iterates solves the GNEP.

Theorem 12. Assume that $X$ is bounded. Let $\{x^k\}$ be generated by Algorithm 11. (a) Suppose $x^k$ is obtained for all $k$. Then $\{x^k\}$ is bounded; $\lim_{k\to\infty}\|x^k - y_{\alpha_k}(x^k)\| = \lim_{k\to\infty}\|x^k - y_{\beta_k}(x^k)\| = 0$; and every cluster point of $\{x^k\}$ is a normalized Nash equilibrium of the GNEP. (b) Suppose $x^k$ is not obtained for some $k$. Then the descent method generates a bounded sequence of iterates along which $V_{\alpha_k\beta_k}$ decreases to zero, so every cluster point of this sequence solves the GNEP.

Proof. (a) Since we use a descent method at iteration $k$ to obtain $x^k$ from $x^{k-1}$, we have $V_{\alpha_k\beta_k}(x^k) \le V_{\alpha_k\beta_k}(x^{k-1})$, so (41) yields
$$V_{\alpha_k\beta_k}(x^k) \le V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1}) + \delta_k. \qquad (43)$$
Denote $W_k := V_{\alpha_k\beta_k}(x^k)$. Then (43) can be written as $W_k \le W_{k-1} + \delta_k$ for all $k$. Using $\sum_k \delta_k < \infty$ in (40), it follows that the sequence $\{W_k\}$ converges ([16], Lemma 2). Since the first inequality in (33) gives
$$\frac{\beta_k - \alpha_k}{2}\|x^k - y_{\beta_k}(x^k)\|^2 \le W_k, \qquad (44)$$
and $\{W_k\}$ is bounded while $\beta_k - \alpha_k \to \infty$, we get $\lim_{k\to\infty}\|x^k - y_{\beta_k}(x^k)\| = 0$; since $y_{\beta_k}(x^k) \in X$ and $X$ is bounded (as are $\{y_{\alpha_k}(x^k)\}$ and $\{y_{\beta_k}(x^k)\}$), the sequence $\{x^k\}$ is bounded. Moreover, since $x^k$ satisfies (42) for all $k$ and $\varepsilon_k \to 0$, we have $\lim_{k\to\infty}\|y_{\alpha_k}(x^k) - y_{\beta_k}(x^k)\| = 0$, and hence $\lim_{k\to\infty}\|x^k - y_{\alpha_k}(x^k)\| = 0$. Now, exactly as in the proof of Theorem 10, every cluster point $\bar{x}$ of $\{x^k\}$ belongs to $X$ and satisfies $\Psi(\bar{x}, y) \le 0$ for all $y \in X$; thus, each cluster point is a normalized Nash equilibrium of the GNEP.
(b) Suppose the descent method does not terminate at some iteration $k$, and let $\{x_j\}$ denote the iterates it generates. By Lemma 7, $V_{\alpha_k\beta_k}$ has bounded level sets, so $\{x_j\}$ is bounded, and the values $V_{\alpha_k\beta_k}(x_j)$ decrease monotonically. If $\|y_{\alpha_k}(x_j) - y_{\beta_k}(x_j)\|$ were bounded away from zero, the assumed descent property would force $V_{\alpha_k\beta_k}(x_j) \to -\infty$, contradicting $V_{\alpha_k\beta_k} \ge 0$; hence $\|y_{\alpha_k}(x_j) - y_{\beta_k}(x_j)\| \to 0$ on a subsequence. Since (42) is never satisfied, on this subsequence we must have, for all large $j$, $\rho(\|x_j - y_{\alpha_k}(x_j)\|) < \|y_{\alpha_k}(x_j) - y_{\beta_k}(x_j)\| \to 0$, and the continuity of $\rho$ together with $\rho(t) > 0$ for $t > 0$ yields $\|x_j - y_{\alpha_k}(x_j)\| \to 0$. The second inequality in (33) then implies that $V_{\alpha_k\beta_k}(x_j) \to 0$ on this subsequence and, by monotonicity, on the whole sequence. Hence it is easy to see that every cluster point $\bar{x}$ of $\{x_j\}$ satisfies $V_{\alpha_k\beta_k}(\bar{x}) = 0$ and, by Theorem 2(b), is a normalized Nash equilibrium of the GNEP.
The proof is complete.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (11271226, 10971118) and the Promotive Research Fund for Excellent Young and Middle-Aged Scientists of Shandong Province (BS2010SF010).