Research Article | Open Access
Biao Qu, Jing Zhao, "Methods for Solving Generalized Nash Equilibrium", Journal of Applied Mathematics, vol. 2013, Article ID 762165, 6 pages, 2013. https://doi.org/10.1155/2013/762165
Methods for Solving Generalized Nash Equilibrium
The generalized Nash equilibrium problem (GNEP) is an extension of the standard Nash equilibrium problem (NEP), in which each player's strategy set may depend on the rival players' strategies. In this paper, we present two descent-type methods. The algorithms are based on a reformulation of the GNEP, via the Nikaido-Isoda function, as an unconstrained optimization problem. We prove that our algorithms are globally convergent, and the convergence analysis does not rely on conditions guaranteeing that every stationary point of the optimization problem is a solution of the GNEP.
1. Introduction
The generalized Nash equilibrium problem (GNEP for short) is an extension of the standard Nash equilibrium problem (NEP for short), in which the strategy set of each player depends on the strategies of all the other players. The GNEP has recently attracted much attention due to its applications in various fields like mathematics, computer science, economics, and engineering [1–11]. For more details, we refer the reader to a recent survey paper by Facchinei and Kanzow and the references therein.
Let us first recall the definition of the GNEP. There are N players labelled by an integer ν = 1, …, N. Each player ν controls the variables x^ν ∈ ℝ^{n_ν}. Let x = (x^1, …, x^N) ∈ ℝⁿ be the vector formed by all these decision variables, where n = n_1 + ⋯ + n_N. To emphasize the νth player's variables within the vector x, we sometimes write x = (x^ν, x^{−ν}), where x^{−ν} denotes all the other players' variables. In the game, each player ν controls the variables x^ν and tries to minimize a cost function θ_ν(x^ν, x^{−ν}) subject to the constraint (x^ν, x^{−ν}) ∈ X with x^{−ν} given as exogenous, where X ⊆ ℝⁿ is a common strategy set. A vector x* ∈ X is called a solution of the GNEP, or a generalized Nash equilibrium, if for each player ν, x*^ν solves the following optimization problem with x*^{−ν} being fixed:
min_{x^ν} θ_ν(x^ν, x*^{−ν})  subject to  (x^ν, x*^{−ν}) ∈ X. (1)
If X is defined as the Cartesian product of certain sets X_ν ⊆ ℝ^{n_ν}, that is, X = X_1 × ⋯ × X_N, then the GNEP reduces to the standard Nash equilibrium problem.
Throughout this paper, we make the following assumption.
Assumption 1. (a) The set X ⊆ ℝⁿ is nonempty, closed, and convex.
(b) The cost function θ_ν is continuously differentiable and, as a function of x^ν alone, convex.
A basic tool for both the theoretical and the numerical solution of (generalized) Nash equilibrium problems is the Nikaido-Isoda function Ψ: ℝⁿ × ℝⁿ → ℝ defined as
Ψ(x, y) = ∑_{ν=1}^{N} [θ_ν(x^ν, x^{−ν}) − θ_ν(y^ν, x^{−ν})]. (2)
Definition 1. x* ∈ X is a normalized Nash equilibrium of the GNEP if max_{y∈X} Ψ(x*, y) = 0 holds, where Ψ denotes the Nikaido-Isoda function defined in (2).
In order to overcome the nondifferentiability of the mapping x ↦ max_{y∈X} Ψ(x, y), von Heusinger and Kanzow [8] used a simple regularization of the Nikaido-Isoda function. For a parameter α > 0, the following regularized Nikaido-Isoda function was considered:
Ψ_α(x, y) = ∑_{ν=1}^{N} [θ_ν(x^ν, x^{−ν}) − θ_ν(y^ν, x^{−ν})] − (α/2)‖x − y‖². (3)
Since, under the given Assumption 1, Ψ_α(x, ·) is strongly concave in y, the maximization problem max_{y∈X} Ψ_α(x, y) has a unique solution for each x, denoted by y_α(x).
The corresponding value function is then defined by
V_α(x) = max_{y∈X} Ψ_α(x, y) = Ψ_α(x, y_α(x)). (4)
Let β > α > 0 be a given parameter. The corresponding value function is then defined by
V_β(x) = max_{y∈X} Ψ_β(x, y) = Ψ_β(x, y_β(x)),
and the difference of the two value functions is denoted by V_{αβ}(x) = V_α(x) − V_β(x).
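To make these objects concrete, the following sketch evaluates y_α(x), V_α(x), and the difference V_{αβ}(x) = V_α(x) − V_β(x) for a small hypothetical two-player game. The cost functions, the box set X = [−2, 2]², and the parameter values are illustrative assumptions, not taken from this paper; since X is a box (so the GNEP is in fact a standard NEP), the inner maximization separates across players and each one-dimensional piece has a closed-form solution.

```python
import numpy as np

ALPHA, BETA = 0.5, 2.0   # regularization parameters with 0 < alpha < beta (assumed values)
LO, HI = -2.0, 2.0       # common strategy set X = [-2, 2]^2 (an assumed box)

def theta1(x1, x2):
    """Cost of player 1 (illustrative): convex in the player's own variable x1."""
    return (x1 - 1.0) ** 2 + x1 * x2

def theta2(x1, x2):
    """Cost of player 2 (illustrative): convex in the player's own variable x2."""
    return (x2 + 1.0) ** 2

def y_reg(x, a):
    """Unique maximizer y_a(x) of Psi_a(x, .) over the box X.

    Over a box the maximization separates across players; each
    one-dimensional strongly concave piece is solved by setting the
    derivative to zero and clipping back into the box."""
    y1 = np.clip((2.0 - x[1] + a * x[0]) / (2.0 + a), LO, HI)
    y2 = np.clip((-2.0 + a * x[1]) / (2.0 + a), LO, HI)
    return np.array([y1, y2])

def psi_reg(x, y, a):
    """Regularized Nikaido-Isoda function Psi_a(x, y)."""
    ni = (theta1(x[0], x[1]) - theta1(y[0], x[1])) \
       + (theta2(x[0], x[1]) - theta2(x[0], y[1]))
    return ni - 0.5 * a * float(np.sum((x - y) ** 2))

def V(x, a):
    """Value function V_a(x) = Psi_a(x, y_a(x))."""
    return psi_reg(x, y_reg(x, a), a)

def V_ab(x):
    """Difference function V_ab(x) = V_alpha(x) - V_beta(x)."""
    return V(x, ALPHA) - V(x, BETA)
```

For this game the unique equilibrium is x* = (1.5, −1) (solve 2(x₁ − 1) + x₂ = 0 and 2(x₂ + 1) = 0), and one can check numerically that V_{αβ}(x*) = 0 while V_{αβ}(x) > 0 away from x*.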
In [8], the following important properties of the function V_{αβ} have been proved.
Theorem 2. The following statements hold: (a) V_{αβ}(x) ≥ 0 for any x ∈ ℝⁿ; (b) x* is a normalized Nash equilibrium of the GNEP if and only if V_{αβ}(x*) = 0; (c) V_{αβ} is continuously differentiable on ℝⁿ, with ∇V_{αβ}(x) = ∇_x Ψ_α(x, y_α(x)) − ∇_x Ψ_β(x, y_β(x)).
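Part (a), for instance, admits a one-line justification from the definitions of the regularized functions (a sketch consistent with the construction above, in the standard notation Ψ_α, Ψ_β, V_{αβ}):

```latex
% For beta > alpha, the quadratic penalties are ordered pointwise:
\Psi_\alpha(x,y)-\Psi_\beta(x,y)=\frac{\beta-\alpha}{2}\,\|x-y\|^{2}\;\ge\;0
\quad\text{for every } y\in X,
% and taking maxima over the same set X preserves this order:
V_{\alpha\beta}(x)=\max_{y\in X}\Psi_\alpha(x,y)-\max_{y\in X}\Psi_\beta(x,y)\;\ge\;0 .
```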
In this paper, we develop two new descent methods for finding a normalized Nash equilibrium of the GNEP by solving the unconstrained optimization problem min_{x∈ℝⁿ} V_{αβ}(x). The key to our methods is a strategy for adjusting α and β when a stationary point of V_{αβ} is not a solution of the GNEP. We will show that our algorithms are globally convergent to a normalized Nash equilibrium under an appropriate assumption on the cost functions, which is not stronger than the one considered in [8].
The organization of the paper is as follows. In Section 2, we state the main assumption underlying our algorithms and present some examples of GNEPs satisfying it. In Section 3, we derive some useful properties of the function V_{αβ}. In Section 4, we formally state our algorithms and prove that they are both globally convergent to a normalized Nash equilibrium.
2. Main Assumption
In order to construct the algorithms and to guarantee their convergence, we make the following assumption.
Assumption 2. For any and , if , we have
We next consider three examples which satisfy Assumption 2.
Example 3. Let us consider the case in which all the cost functions are separable, that is,
where is convex and . A simple calculation shows that, for any , we have
Hence Assumption 2 holds.
Example 4. Consider the case where the cost function is quadratic, that is,
For such cost functions, we have
Therefore, if the matrix is positive semidefinite, Assumption 2 is satisfied.
In the following example, we show the relationship between our assumption and the one considered in [8].
For any , a given with , the inequality holds.
3. Properties of V_{αβ}
Lemma 6. For any x ∈ ℝⁿ and β > α > 0, we have
Proof. Since y_α(x) satisfies the optimality condition of the maximization problem defining V_α(x), we have
In a similar way, it follows that y_β(x) satisfies
Since each θ_ν, as a function of x^ν alone, is convex, we have the following two inequalities, respectively. Thus, using the definition of Ψ_α and (23), we have
Similarly, using the definition of Ψ_β and (24), we have
The proof is complete.
Lemma 7. Assume X is bounded. For any x ∈ ℝⁿ and β > α > 0, we have
Proof. We have from (19) that
By the definition of , we have
Since y_α(x), y_β(x) ∈ X and X is bounded, we get that
This completes the proof.
Equation (8) and Assumption 2 yield the following estimate, in which the nonnegativity of V_{αβ} follows from the inequalities (23) and (24). In particular, either the residual is above a given tolerance, in which case we obtain a direction of sufficient descent for V_{αβ} at x, or else, as we show in the lemma below, x is an approximate solution of the GNEP with accuracy depending on the tolerance and the parameters α and β. This result will lead to our methods.
Lemma 8. For any x ∈ ℝⁿ and β > α > 0, the inequalities (32) and (33) hold.
Proof. Inequality (32) follows immediately from (19) in Lemma 6.
The first inequality in (33) follows directly from the definition of V_{αβ}.
Since V_{αβ}(x) is the sum of two nonnegative quantities (see (23) and (24)), we have
Thus we obtain the second inequality in (33). This completes the proof.
4. Two Methods for Solving the GNEP
In this section, we introduce two methods for solving the GNEP, motivated by the D-gap function scheme for solving monotone variational inequalities [14, 15]. We first formally describe our methods below and then analyze their convergence using Lemma 8.
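For comparison, recall the D-gap construction of [14, 15] that motivates this scheme: for a variational inequality with mapping F and set X, the regularized gap function and its difference function are

```latex
g_\alpha(x)=\max_{y\in X}\Big\{\langle F(x),\,x-y\rangle-\frac{\alpha}{2}\|x-y\|^{2}\Big\},
\qquad
g_{\alpha\beta}(x)=g_\alpha(x)-g_\beta(x),\quad \beta>\alpha>0 .
```

The function g_{αβ} is nonnegative on ℝⁿ and vanishes exactly at the solutions of the variational inequality, which is the same pattern the function V_{αβ} follows for the GNEP.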
Algorithm 9. Choose an arbitrary initial point , and any . Choose any sequences of numbers , , , , such that
For k = 0, 1, 2, …, we iterate the following.
Iteration k. Choose any . Choose any and satisfying
Apply a descent method to the unconstrained minimization of the function V_{α_k β_k}, with x^k as the starting point and using the safeguard descent direction at x^k, until the method generates an x satisfying the stopping criterion of iteration k. The resulting x is denoted by x^{k+1}.
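As an illustration of the descent step, the sketch below minimizes V_{αβ} by plain gradient descent with an Armijo backtracking line search on a hypothetical two-player game (illustrative costs θ₁(x) = (x₁ − 1)² + x₁x₂ and θ₂(x) = (x₂ + 1)², box X = [−2, 2]², assumed parameters α = 0.5 and β = 2). The gradient is approximated by central differences as a stand-in for the analytic formula, and the safeguarding and parameter-update logic of Algorithm 9 is omitted; this is a generic sketch, not the paper's exact method.

```python
import numpy as np

ALPHA, BETA = 0.5, 2.0   # assumed parameters with 0 < alpha < beta
LO, HI = -2.0, 2.0       # assumed box X = [-2, 2]^2

def y_reg(x, a):
    # closed-form maximizer of the regularized Nikaido-Isoda function over the box
    y1 = np.clip((2.0 - x[1] + a * x[0]) / (2.0 + a), LO, HI)
    y2 = np.clip((-2.0 + a * x[1]) / (2.0 + a), LO, HI)
    return np.array([y1, y2])

def V(x, a):
    # value function V_a(x) = Psi_a(x, y_a(x)) for the illustrative costs
    y = y_reg(x, a)
    th1 = lambda u, v: (u - 1.0) ** 2 + u * v
    th2 = lambda u, v: (v + 1.0) ** 2
    ni = (th1(x[0], x[1]) - th1(y[0], x[1])) + (th2(x[0], x[1]) - th2(x[0], y[1]))
    return ni - 0.5 * a * float(np.sum((x - y) ** 2))

def V_ab(x):
    return V(x, ALPHA) - V(x, BETA)

def num_grad(f, x, h=1e-6):
    # central-difference gradient, standing in for the analytic gradient of V_ab
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def descend(x0, max_iter=2000, tol=1e-8):
    """Gradient descent with Armijo backtracking on V_ab."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = num_grad(V_ab, x)
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, V_ab(x)
        # shrink the step until the sufficient-decrease condition holds
        while V_ab(x - t * g) > fx - 1e-4 * t * float(g @ g) and t > 1e-12:
            t *= 0.5
        x = x - t * g
    return x
```

Starting from x⁰ = (0, 0), the iterates approach the equilibrium (1.5, −1) of this toy game, at which V_{αβ} vanishes.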
Theorem 10. Assume X is bounded. Let {x^k} be generated by Algorithm 9. Then {x^k} is bounded; V_{α_k β_k}(x^k) → 0; and every cluster point of {x^k} is a normalized Nash equilibrium of the GNEP.
Proof. Denote . By (38), we have for and it follows from (37) that (, Lemma 3). For each , we have from Lemma 8 that (32) and (33) hold with , , . This together with and yields
where and . Since , the first inequality in (39) implies is bounded. Moreover, this also implies .
Since , the last two inequalities in (39) yield . Since for each , we have , and this yields for each cluster point of . Thus, each cluster point is a normalized Nash equilibrium of the GNEP. This completes the proof.
Algorithm 11. Choose any x⁰ ∈ ℝⁿ, any β > α > 0, and two sequences of nonnegative numbers such that
Choose any continuous function φ with φ(0) = 0. For k = 0, 1, 2, …, we iterate the following.
Iteration k. Choose any and then choose satisfying
Apply a descent method to the unconstrained minimization of the function V_{αβ}, with x^k as the starting point. We assume the descent method has the property that the amount of descent achieved per step is bounded away from zero whenever the iterates are bounded and the gradient is bounded away from zero. Then, either the method generates, in a finite number of steps, an x satisfying (42), which we denote by x^{k+1}, or else V_{αβ} must decrease towards zero along the generated iterates, in which case any cluster point of them solves the GNEP.
Theorem 12. Assume X is bounded. Let {x^k} be generated by Algorithm 11. (a) Suppose x^{k+1} is obtained for all k. Then {x^k} is bounded; V_{αβ}(x^k) → 0; and every cluster point of {x^k} is a normalized Nash equilibrium of the GNEP. (b) Suppose x^{k+1} is not obtained for some k. Then the descent method generates a bounded sequence of iterates along which V_{αβ} decreases to zero, so every cluster point of these iterates solves the GNEP.
Proof. (a) Since we use a descent method at iteration k to obtain x^{k+1} from x^k, we have V_{αβ}(x^{k+1}) ≤ V_{αβ}(x^k), so (41) yields
Denote . This can then be written as for . Using and (41), it follows that the sequence converges to some (, Lemma 2). Since (32) implies the sequence is bounded.
We claim that . Suppose the contrary. Then for all sufficiently large, it holds that . Then,
Since, by the construction of the algorithm, and , and is bounded (as are and ), we get
Then , so
Since x^{k+1} satisfies (42) for all k, this contradicts the convergence of the sequence (recall that φ is a continuous function). Hence the claim holds. For each k, we have from Lemma 8 that (33) holds, and from the fact that
This, together with , yields where and . Since , (44) implies is bounded. Moreover, (44) implies . Also, we have , so.
From the facts that , (49), and , we get . Since for each , we have from the definition of that which yields for each cluster point of . Thus, each cluster point is a normalized Nash equilibrium of the GNEP.
(b) It is easy to prove that V_{αβ} vanishes at every cluster point. Hence each cluster point is a normalized Nash equilibrium of the GNEP.
The proof is complete.
Acknowledgments
This research was partly supported by the National Natural Science Foundation of China (11271226, 10971118) and the Promotive Research Fund for Excellent Young and Middle-Aged Scientists of Shandong Province (BS2010SF010).
References
[1] A. Dreves and C. Kanzow, “Nonsmooth optimization reformulations characterizing all solutions of jointly convex generalized Nash equilibrium problems,” Computational Optimization and Applications, vol. 50, no. 1, pp. 23–48, 2011.
[2] A. Dreves, C. Kanzow, and O. Stein, “Nonsmooth optimization reformulations of player convex generalized Nash equilibrium problems,” Journal of Global Optimization, vol. 53, no. 4, pp. 587–614, 2012.
[3] F. Facchinei and C. Kanzow, “Generalized Nash equilibrium problems,” Annals of Operations Research, vol. 175, pp. 177–211, 2010.
[4] F. Facchinei, A. Fischer, and V. Piccialli, “On generalized Nash games and variational inequalities,” Operations Research Letters, vol. 35, no. 2, pp. 159–164, 2007.
[5] F. Facchinei and C. Kanzow, “Generalized Nash equilibrium problems,” 4OR: A Quarterly Journal of Operations Research, vol. 5, no. 3, pp. 173–210, 2007.
[6] D. Han, H. Zhang, G. Qian, and L. Xu, “An improved two-step method for solving generalized Nash equilibrium problems,” European Journal of Operational Research, vol. 216, no. 3, pp. 613–623, 2012.
[7] P. T. Harker, “Generalized Nash games and quasi-variational inequalities,” European Journal of Operational Research, vol. 54, no. 1, pp. 81–94, 1991.
[8] A. von Heusinger and C. Kanzow, “Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions,” Computational Optimization and Applications, vol. 43, no. 3, pp. 353–377, 2009.
[9] B. Panicucci, M. Pappalardo, and M. Passacantando, “On solving generalized Nash equilibrium problems via optimization,” Optimization Letters, vol. 3, no. 3, pp. 419–435, 2009.
[10] B. Qu and J. G. Jiang, “On the computation of normalized Nash equilibrium for generalized Nash equilibrium problem,” Journal of Convergence Information Technology, vol. 7, no. 22, pp. 16–21, 2012.
[11] J. Zhang, B. Qu, and N. Xiu, “Some projection-like methods for the generalized Nash equilibria,” Computational Optimization and Applications, vol. 45, no. 1, pp. 89–109, 2010.
[12] S. D. Flåm and A. S. Antipin, “Equilibrium programming using proximal-like algorithms,” Mathematical Programming, vol. 78, no. 1, pp. 29–41, 1997.
[13] S. D. Flåm and A. Ruszczyński, “Noncooperative convex games: computing equilibrium by partial regularization,” Working Paper 94-42, Laxenburg, Austria, 1994.
[14] M. V. Solodov and P. Tseng, “Some methods based on the D-gap function for solving monotone variational inequalities,” Computational Optimization and Applications, vol. 17, no. 2-3, pp. 255–277, 2000.
[15] B. Qu, C. Y. Wang, and J. Z. Zhang, “Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function,” Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 535–552, 2003.
[16] B. T. Polyak, Introduction to Optimization, Translations Series in Mathematics and Engineering, Optimization Software, New York, NY, USA, 1987.
Copyright © 2013 Biao Qu and Jing Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.