Research Article | Open Access

# Neural Network to Solve Concave Games

**Academic Editor:** Daniel Thalmann

#### Abstract

This paper is concerned with a neural network method for solving concave games. By combining variational inequalities, the Ky Fan inequality, and the projection equation, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, several stability results are given. Finally, simulation results for two classic games illustrate the theoretical results.

#### 1. Introduction

Recently, game theory has attracted considerable attention due to its extensive applications in economics, political science, and psychology, as well as logic and biology [1–5]. It has been widely recognized as an important tool in many fields.

For game theory, the existence and stability of the Nash equilibrium point are the problems of greatest concern. In past decades, these problems have been widely researched, and many excellent papers and monographs are now available, such as [6–11]. However, most previously established theoretical results are difficult to adopt in practical applications: the existence results only tell us that, for a given game, a Nash equilibrium point exists; they do not tell us what it is or how to calculate it.

As is well known, the problem of computing a Nash equilibrium point is as important as the existence and stability problems. In order to compute the Nash equilibrium point for a given game, all kinds of optimization algorithms and experiments have been derived in [12–15]. Among these methods, the computer-based approach is one of the most popular: for a given game, by utilizing a computer program to simulate the players, the Nash equilibrium point can be approximately solved through logical calculation. However, when the number of players is large, computational complexity and convergence analysis must be considered.

Conversely, projection neural networks for solving optimization problems have distinct advantages, as elaborated in [16–18]. First, they have parallel computing ability. Second, the solution naturally exists. Last, they can be implemented easily by circuits. One natural question is whether, for a given game, we can compute the Nash equilibrium point by a neural network. This idea motivates this study.

Combining concave game theory, projection equation theory, variational inequalities, the Ky Fan inequality, and the neural network method, we first establish the relationship between neural networks and concave games and point out that the equilibrium point of the constructed neural network is the Nash equilibrium point of the game of interest. Then, by using Lyapunov stability theory, we analyze the stability of the established neural network. Finally, two classic games are presented to illustrate the validity of the main results.

#### 2. $n$-Person Noncooperative Games

Consider a typical $n$-person noncooperative game as follows: let $N = \{1, 2, \dots, n\}$ be the set of players. For each $i \in N$, $X_{i}$, a metric space, denotes the strategy set of the $i$th player, and $f_{i}: X \to \mathbb{R}$ is the payoff function of the $i$th player, respectively.

Denote $X = \prod_{i=1}^{n} X_{i}$ and, for each $x \in X$, write $x = (x_{i}, x_{-i})$, where $x_{-i}$ collects the strategies of all players other than $i$. For $n$-person noncooperative games, one of the most important problems is whether there exists $x^{*} \in X$ such that, for each $i \in N$ and all $y_{i} \in X_{i}$, $f_{i}(x^{*}) \ge f_{i}(y_{i}, x^{*}_{-i})$; such an $x^{*}$ is called a Nash equilibrium point. Another problem is, if $x^{*}$ exists, how to find it. For the further discussion, the following basic assumptions, definitions, and lemmas are needed.

*Assumptions*. We have the following.
(1) For each $i \in N$, the strategy set $X_{i}$ is nonempty, closed, and convex.
(2) For each $i \in N$, the payoff function $f_{i}$ is continuously differentiable and, for each fixed $x_{-i}$, $f_{i}(x_{i}, x_{-i})$ is concave in $x_{i}$ on $X_{i}$.

*Definition 1. *Let $K$ be a convex subset of a linear space $E$, and let $f: K \to \mathbb{R}$; if, for all $x, y \in K$ and all $\lambda \in [0, 1]$, the function $f$ satisfies
$$f\big(\lambda x + (1-\lambda)y\big) \ge \min\{f(x), f(y)\},$$
then $f$ is said to be quasiconcave.

*Definition 2. *Let $E$ be a Hausdorff topological space, and let $f: E \to \mathbb{R}$ be a functional; if, for each $x_{0} \in E$ and each $\varepsilon > 0$, there exists an open neighborhood $U$ of $x_{0}$ such that, for all $x \in U$,
$$f(x) > f(x_{0}) - \varepsilon,$$
then the functional $f$ is said to be lower semicontinuous.

*Remark 3. *Obviously, if a function is concave, then it is quasiconcave; if a functional is continuous, then it is lower semicontinuous.

Lemma 4 (Ky Fan inequality). *Let $X$ be a nonempty convex compact subset of a Hausdorff linear topological space $E$, and let $\varphi: X \times X \to \mathbb{R}$ satisfy the following.*(1)*For each $y \in X$, $x \mapsto \varphi(x, y)$ is lower semicontinuous on $X$.*(2)*For each $x \in X$, $y \mapsto \varphi(x, y)$ is quasiconcave on $X$.*(3)*$\varphi(x, x) \le 0$ for all $x \in X$.**Then, there exists $x^{*} \in X$ such that, for all $y \in X$, $\varphi(x^{*}, y) \le 0$.*

Lemma 5. *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^{m}$, and let the function $f: \mathbb{R}^{m} \to \mathbb{R}$ be continuously differentiable and concave; then, for all $x, y \in K$,
$$f(y) - f(x) \le \big\langle \nabla f(x),\, y - x \big\rangle,$$
where $\nabla f(x) = \big(\partial f(x)/\partial x_{1}, \dots, \partial f(x)/\partial x_{m}\big)^{T}$.*

From Lemma 5, the following results are obvious.

Lemma 6. *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^{m}$, let the function $f: \mathbb{R}^{m} \to \mathbb{R}$ be continuously differentiable and concave, and let $x^{*} \in K$; then $f(x^{*}) = \max_{x \in K} f(x)$ if and only if the following variational inequality holds:
$$\big\langle \nabla f(x^{*}),\, x - x^{*} \big\rangle \le 0, \quad \forall x \in K.$$*

*Remark 7. *An $n$-person noncooperative game is called a concave game if each payoff function $f_{i}$ is continuous and, for each fixed $x_{-i}$, $f_{i}(x_{i}, x_{-i})$ is concave in $x_{i}$ on $X_{i}$. For concave games, there exists a special equivalence relation among the Brouwer fixed-point theorem and variational inequality problems [19]; this relationship provides the theoretical basis for solving the Nash equilibrium point of a concave game by neural networks.

#### 3. The Equivalence between Concave Games and Variational Inequalities

In this section, we point out that any concave game can be equivalently transformed into a variational inequality problem, and we utilize the Ky Fan inequality to prove the existence of the Nash equilibrium point.

Theorem 8. *For each $i \in N$, let the strategy set $X_{i}$ be nonempty, closed, and convex, and let the payoff function $f_{i}$ be continuously differentiable and, for each fixed $x_{-i}$, concave in $x_{i}$ on $X_{i}$. Then $x^{*} \in X$ is a Nash equilibrium point of the concave game if and only if $x^{*}$ is a solution of the following variational inequality, namely:
$$\big\langle F(x^{*}),\, x - x^{*} \big\rangle = \sum_{i=1}^{n}\big\langle \nabla_{x_{i}} f_{i}(x^{*}),\, x_{i} - x^{*}_{i} \big\rangle \le 0, \quad \forall x \in X,$$
where $F(x) = \big(\nabla_{x_{1}} f_{1}(x)^{T}, \dots, \nabla_{x_{n}} f_{n}(x)^{T}\big)^{T}$.*

*Proof. *Suppose $x^{*}$ solves the variational inequality. For each $i \in N$ and each $y_{i} \in X_{i}$, set $x = (y_{i}, x^{*}_{-i})$; then $x \in X$, and, since $x_{j} = x^{*}_{j}$ for $j \neq i$, the variational inequality reduces to $\langle \nabla_{x_{i}} f_{i}(x^{*}), y_{i} - x^{*}_{i} \rangle \le 0$; from Lemma 6, we have $f_{i}(x^{*}) = \max_{y_{i} \in X_{i}} f_{i}(y_{i}, x^{*}_{-i})$, so $x^{*}$ is a Nash equilibrium point. Conversely, if $x^{*}$ is a Nash equilibrium of the concave game, then, for each $i \in N$, $f_{i}(x^{*}) = \max_{y_{i} \in X_{i}} f_{i}(y_{i}, x^{*}_{-i})$; from Lemma 6, we have $\langle \nabla_{x_{i}} f_{i}(x^{*}), x_{i} - x^{*}_{i} \rangle \le 0$ for all $x_{i} \in X_{i}$, and summing over $i$ yields the variational inequality. This completes the proof.

*Remark 9. *This proof is similar to that for the convex situation in [20]; in fact, one can obtain this result from [20] directly. Here, for readability, we still give the proof details.

*Remark 10. *From Theorem 8, one can see that the Nash equilibrium point problem of a concave game is equivalent to a variational inequality problem. In order to solve the Nash equilibrium point of a concave game, we only need to solve the related variational inequality.

Theorem 11. *For each $i \in N$, let the strategy set $X_{i}$ be nonempty, closed, and convex, and let the payoff function $f_{i}$ be continuously differentiable and, for each fixed $x_{-i}$, concave in $x_{i}$ on $X_{i}$. Then, there exists $x^{*} \in X$ such that
$$\big\langle F(x^{*}),\, x - x^{*} \big\rangle \le 0, \quad \forall x \in X.$$
Namely, the Nash equilibrium point of the concerned concave game exists.*

*Proof. *For $x, y \in X$, define
$$\varphi(x, y) = \big\langle F(x),\, y - x \big\rangle = \sum_{i=1}^{n}\big\langle \nabla_{x_{i}} f_{i}(x),\, y_{i} - x_{i} \big\rangle.$$

Since each $f_{i}$ is continuously differentiable, it is easy to check the following:(1)for each $y \in X$, $x \mapsto \varphi(x, y)$ is continuous on $X$;(2)for each $x \in X$, $y \mapsto \varphi(x, y)$ is concave on $X$;(3)$\varphi(x, x) = 0$ for all $x \in X$.

From Lemma 4, there exists $x^{*} \in X$ such that, for all $y \in X$, we have $\varphi(x^{*}, y) \le 0$; namely, $\langle F(x^{*}), y - x^{*} \rangle \le 0$, $\forall y \in X$, and this completes the proof.

*Remark 12. *From Theorem 11, one can see that if $X_{i}$ and $f_{i}$ satisfy assumptions (1) and (2), the solution of the variational inequality in Theorem 11 exists, which means that the Nash equilibrium point of our concerned concave game exists. Combining Theorems 8 and 11, we can construct a neural network model to solve concave game problems.

#### 4. Neural Network Model for Concave Games

##### 4.1. Neural Network Model Construction

To proceed, we first introduce an important lemma as follows.

Lemma 13 (see [21]). *Let $F: \mathbb{R}^{m} \to \mathbb{R}^{m}$ be a continuous function, and let $\Omega$ be a closed convex subset of $\mathbb{R}^{m}$; then $x^{*}$ satisfies $\langle F(x^{*}), x - x^{*} \rangle \le 0$ for all $x \in \Omega$ if and only if $x^{*}$ is a fixed point of the equation $x = P_{\Omega}\big(x + \alpha F(x)\big)$, where $\alpha$ is an arbitrary positive constant and $P_{\Omega}$ is the projection operator defined by
$$P_{\Omega}(x) = \arg\min_{u \in \Omega}\, \|x - u\|.$$*
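As a small numerical illustration of Lemma 13 (a sketch whose function and numbers are chosen here for illustration, not taken from the paper), consider one player maximizing the concave payoff $f(x) = -(x-2)^{2}$ over $\Omega = [0, 1]$; the maximizer $x^{*} = 1$ is precisely the fixed point of $x \mapsto P_{\Omega}(x + \alpha f'(x))$:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection P_Omega onto the box [lo, hi]: the closest point of the box."""
    return np.clip(x, lo, hi)

# One player maximizes the concave payoff f(x) = -(x - 2)^2 over Omega = [0, 1].
# Here F(x) = f'(x) = -2*(x - 2), and the maximizer over Omega is x* = 1.
alpha = 0.5                       # any positive constant works in Lemma 13
F = lambda x: -2.0 * (x - 2.0)

x_star = 1.0
# x* is a fixed point of x -> P_Omega(x + alpha*F(x)) ...
assert project_box(x_star + alpha * F(x_star), 0.0, 1.0) == x_star
# ... while a non-optimal point is not.
x = 0.3
assert project_box(x + alpha * F(x), 0.0, 1.0) != x
```

For box constraints the projection is a componentwise clip; for a general closed convex $\Omega$ it is the nearest-point map.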

On the basis of Theorems 8 and 11 and Lemma 13, we can construct the following neural network to solve the Nash equilibrium point of our concerned concave game:
$$\frac{dx(t)}{dt} = P_{X}\big(x(t) + \alpha F(x(t))\big) - x(t), \tag{9}$$
where $x(t)$ is the state vector, $\alpha > 0$, $X = \prod_{i=1}^{n} X_{i}$, $F(x) = \big(\nabla_{x_{1}} f_{1}(x)^{T}, \dots, \nabla_{x_{n}} f_{n}(x)^{T}\big)^{T}$, and $f_{i}$ is the payoff function of the $i$th player.

*Remark 14. *From Lemma 13, one can see that $x^{*}$ is an equilibrium point of system (9) if and only if it is a Nash equilibrium point of our concerned concave game; by Theorem 11, the equilibrium point of system (9) exists. Thus, if the equilibrium point of system (9) is asymptotically stable, we can solve the Nash equilibrium point through neural network (9), which can be implemented by an electric circuit. This means that the Nash equilibrium point can be solved by an electric circuit.
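As a concrete sketch of how system (9) can be simulated, the forward-Euler discretization below solves a hypothetical two-player quadratic game (the game and all numbers are assumptions of this sketch, not from the paper): $f_{i}(x) = -\big(x_{i} - (1 - 0.5\,x_{j})\big)^{2}$ on $X = [0, 1]^{2}$, whose Nash equilibrium is $x^{*} = (2/3, 2/3)$.

```python
import numpy as np

def F(x):
    """Pseudogradient: the i-th entry is the derivative of f_i in x_i."""
    return np.array([
        -2.0 * (x[0] - (1.0 - 0.5 * x[1])),
        -2.0 * (x[1] - (1.0 - 0.5 * x[0])),
    ])

def solve(x0, alpha=0.1, h=0.05, steps=5000):
    """Forward-Euler simulation of dx/dt = P_X(x + alpha*F(x)) - x on X = [0,1]^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (np.clip(x + alpha * F(x), 0.0, 1.0) - x)
    return x

x = solve([0.0, 1.0])
assert np.allclose(x, [2.0 / 3.0, 2.0 / 3.0], atol=1e-4)
```

Each player's best response here is $x_{i} = 1 - 0.5\,x_{j}$; the state settles where both best responses hold simultaneously.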

*Remark 15. *If $F$ is the negative gradient of a convex function and $X$ is the feasible set, system (9) becomes a typical projection neural network model, which is widely researched in [22–25]. Similar to [24], we give the stability analysis as follows.

##### 4.2. Stability Analysis

Set $x^{*}$ as the equilibrium point (also the Nash equilibrium point) of system (9), and let $y(t) = x(t) - x^{*}$; then system (9) can be transformed into
$$\frac{dy(t)}{dt} = P_{X}\big(y(t) + x^{*} + \alpha F(y(t) + x^{*})\big) - \big(y(t) + x^{*}\big). \tag{10}$$

Lemma 16 (see [21]). *Let $\Omega$ be a closed convex set of $\mathbb{R}^{m}$; then the projection operator $P_{\Omega}$ satisfies
$$\big\langle v - P_{\Omega}(v),\, P_{\Omega}(v) - u \big\rangle \ge 0, \quad \forall v \in \mathbb{R}^{m},\ u \in \Omega; \qquad \|P_{\Omega}(u) - P_{\Omega}(v)\| \le \|u - v\|, \quad \forall u, v \in \mathbb{R}^{m},$$
where $\|\cdot\|$ denotes the Euclidean norm.*

Theorem 17. *Under assumptions (1) and (2), the state vector of system (9) globally asymptotically converges to the Nash equilibrium point.*

*Proof. *Set $\mathcal{E}$ as the equilibrium point set of system (9); obviously, $\mathcal{E} \neq \emptyset$. If the state vector $x(t) \in \mathcal{E}$, then the conclusion holds naturally. Without loss of generality, we assume that $x(t) \notin \mathcal{E}$. In this case, construct the Lyapunov function as follows:
$$V(t) = \frac{1}{2}\|y(t)\|^{2} = \frac{1}{2}\|x(t) - x^{*}\|^{2}. \tag{12}$$

Since $\|y(t)\|^{2} \ge 0$ for all $t$, we have $V(t) \ge 0$, with $V(t) = 0$ if and only if $x(t) = x^{*}$. The time derivative of $V(t)$ along the trajectory of system (10) is given as
$$\dot{V}(t) = \big\langle y(t),\, \dot{y}(t) \big\rangle = \big\langle x(t) - x^{*},\, P_{X}\big(x(t) + \alpha F(x(t))\big) - x(t) \big\rangle. \tag{13}$$

Notice that $x^{*} = P_{X}\big(x^{*} + \alpha F(x^{*})\big)$; it follows that
$$\dot{V}(t) = \big\langle x - x^{*},\, P_{X}\big(x + \alpha F(x)\big) - P_{X}\big(x^{*} + \alpha F(x^{*})\big) \big\rangle - \|x - x^{*}\|^{2}. \tag{14}$$

Since $x \neq x^{*}$, denote $w = (x - x^{*}) + \alpha\big(F(x) - F(x^{*})\big)$, where $x + \alpha F(x) = x^{*} + \alpha F(x^{*}) + w$; then (14) can be rewritten as
$$\dot{V}(t) = \big\langle x - x^{*},\, P_{X}\big(x^{*} + \alpha F(x^{*}) + w\big) - P_{X}\big(x^{*} + \alpha F(x^{*})\big) \big\rangle - \|x - x^{*}\|^{2}. \tag{15}$$

Notice that each $f_{i}$ is concave in $x_{i}$; we have $\langle x - x^{*}, F(x) - F(x^{*}) \rangle \le 0$, and additionally, from Lemma 16, we have $\big\|P_{X}\big(x^{*} + \alpha F(x^{*}) + w\big) - P_{X}\big(x^{*} + \alpha F(x^{*})\big)\big\| \le \|w\|$; thus, $\dot{V}(t) < 0$. On the basis of the Lyapunov stability theory, one can obtain that the state vector of system (9) globally asymptotically converges to the Nash equilibrium point. This completes the proof.

*Remark 18. *For convex optimization problems, the projection neural network has a similar property. Compared with previous work, our Lyapunov function sufficiently exploits the properties of the Nash equilibrium point; thus, it is simpler and the proof is more concise. The reason is that, in our proof, the value of $\alpha$ in Lemma 13 is set more appropriately.

*Remark 19. *Similar to the proof of [23], we can obtain the following propositions.

Proposition 20. *For any initial value $x(0) \in X$, the solution of system (9) is unique, bounded, and satisfies $x(t) \in X$ for all $t \ge 0$.*

Proposition 21. *If the payoff functions are defined and continuous on the whole space, then, under assumptions (1) and (2), for any initial value $x(0)$, there exists $T \ge 0$ such that, for all $t \ge T$, the solution of system (9) satisfies $x(t) \in X$.*

*Remark 22. *Proposition 20 means that, for any initial value in the strategy set , the solution through this initial value is unique and bounded, and the strategy set is an invariant set. Proposition 21 means that the strategy set is attractive.

*Remark 23. *On the basis of Propositions 20 and 21, we can further show that the equilibrium point of system (9) is not only globally asymptotically stable but also approximately exponentially stable.

Theorem 24. *If the payoff functions are defined and continuous on the whole space, then, under assumptions (1) and (2), when $\alpha$ is sufficiently small, the state vector of system (9) approximately exponentially converges to the Nash equilibrium point, and the approximate convergence exponent is 1.*

*Proof. *From system (9), one can obtain
$$x(t) = e^{-t}x(0) + e^{-t}\int_{0}^{t} e^{s}\, P_{X}\big(x(s) + \alpha F(x(s))\big)\, ds.$$

Set the initial value of system (9) as $x(0) = x_{0}$, and by differential theory and Lemma 13, we have
$$x(t) - x^{*} = e^{-t}\big(x_{0} - x^{*}\big) + e^{-t}\int_{0}^{t} e^{s}\Big[P_{X}\big(x(s) + \alpha F(x(s))\big) - P_{X}\big(x^{*} + \alpha F(x^{*})\big)\Big]\, ds.$$

If $x_{0} \in X$, from Proposition 20, we have $x(t) \in X$ for all $t \ge 0$. Notice assumptions (1) and (2); one can obtain that $F$ is Lipschitz continuous on the bounded region containing the trajectory, with a constant which can be assumed to be $L > 0$. In this case, from Lemma 16, we have
$$\|x(t) - x^{*}\| \le e^{-t}\|x_{0} - x^{*}\| + e^{-t}\int_{0}^{t} e^{s}\,\big\|\big(x(s) - x^{*}\big) + \alpha\big(F(x(s)) - F(x^{*})\big)\big\|\, ds.$$

By the Gronwall–Bellman inequality, when $\alpha$ is sufficiently small, one can obtain that
$$\|x(t) - x^{*}\| \le \|x_{0} - x^{*}\|\, e^{-(1 - O(\alpha))\,t},$$
which means that, when $x_{0} \in X$ and $\alpha$ is sufficiently small, the state vector of system (9) approximately exponentially converges to the Nash equilibrium point, and the approximate convergence exponent is 1.

If $x_{0} \notin X$, from Proposition 21, there exists $T \ge 0$ such that $x(t) \in X$ for all $t \ge T$. In this case, we have
$$x(t) - x^{*} = e^{-(t-T)}\big(x(T) - x^{*}\big) + e^{-t}\int_{T}^{t} e^{s}\Big[P_{X}\big(x(s) + \alpha F(x(s))\big) - P_{X}\big(x^{*} + \alpha F(x^{*})\big)\Big]\, ds, \quad t \ge T.$$

Since $F$ is continuous on the whole space, under assumptions (1) and (2), by Proposition 21, there exist positive constants $M_{1}, M_{2}$ such that $\|x(T) - x^{*}\| \le M_{1}$ and $\big\|P_{X}\big(x(s) + \alpha F(x(s))\big) - P_{X}\big(x^{*} + \alpha F(x^{*})\big)\big\| \le \|x(s) - x^{*}\| + \alpha M_{2}$. Thus, we have
$$\|x(t) - x^{*}\| \le M_{1}\, e^{-(t-T)} + e^{-t}\int_{T}^{t} e^{s}\big(\|x(s) - x^{*}\| + \alpha M_{2}\big)\, ds, \quad t \ge T.$$

By the Gronwall–Bellman inequality, when $\alpha$ is sufficiently small, one can obtain that
$$\|x(t) - x^{*}\| \le C\, e^{-(1 - O(\alpha))\,(t-T)}, \quad t \ge T,$$
where $C > 0$ depends on $M_{1}$ and $M_{2}$, which means that, when $x_{0} \notin X$ and $\alpha$ is sufficiently small, the state vector of system (9) still approximately exponentially converges to the Nash equilibrium point, and the approximate convergence exponent is 1. This completes the proof.

*Remark 25. *When system (9) is applied to convex optimization problems, Theorem 24 requires neither the existence of second derivatives of the objective function nor local boundedness of its gradient. Thus, the conditions of Theorem 24 are weaker than those derived in [24].

#### 5. Numerical Examples

In order to show the effectiveness of the technique proposed in this paper, we revisit two classic games as follows.

*Example 26 (Cournot competition). *Consider an industry comprised of two firms that choose output levels $q_{1}, q_{2} \ge 0$ and have cost functions $C_{i}(q_{i})$, $i = 1, 2$. The firms' products are assumed to be perfect substitutes (the homogeneous-goods case). Let $Q = q_{1} + q_{2}$ denote the aggregate industry output level; market demand for the perfect-substitutes case is a function of aggregate output, and its inverse is denoted as $P(Q)$. Thus, the firms' profit functions can be written as
$$\pi_{i}(q_{1}, q_{2}) = P(Q)\, q_{i} - C_{i}(q_{i}), \quad i = 1, 2.$$

A Cournot equilibrium $(q_{1}^{*}, q_{2}^{*})$ means that
$$\pi_{i}\big(q_{i}^{*}, q_{-i}^{*}\big) \ge \pi_{i}\big(q_{i}, q_{-i}^{*}\big), \quad \forall q_{i} \ge 0,\ i = 1, 2.$$

For this game, the existence, uniqueness, and stability problems of the Cournot equilibrium have been deeply researched by many authors. Here, we are only concerned with how to calculate it when specific profit functions and parameters are given. In order to verify the technique established in this paper, we assume the linear specification $P(Q) = a - bQ$ and $C_{i}(q_{i}) = c\, q_{i}$, where $a, b, c$ are given positive constants and $a > c$. In this case, as is well known, the Nash equilibrium point is $q_{1}^{*} = q_{2}^{*} = (a - c)/(3b)$ [26]. Obviously, the strategy set $X_{i}$, $i = 1, 2$, is nonempty, closed, and convex, and the profit functions $\pi_{i}$, $i = 1, 2$, satisfy assumption (2). By Section 4, we can construct the following neural network to solve the Nash equilibrium point:
$$\frac{dq(t)}{dt} = P_{X}\big(q(t) + \alpha F(q(t))\big) - q(t), \tag{25}$$
where $q = (q_{1}, q_{2})^{T}$ is the state vector and $F(q) = \big(\partial\pi_{1}/\partial q_{1},\ \partial\pi_{2}/\partial q_{2}\big)^{T}$, where
$$\frac{\partial\pi_{i}}{\partial q_{i}} = a - b\,(q_{1} + q_{2}) - b\, q_{i} - c, \quad i = 1, 2.$$

For parameter values satisfying the assumptions above, the Nash equilibrium point is given by the closed-form expression. In what follows, we show that, for an initial value either inside or outside the strategy set, neural network (25) approximately exponentially converges to the Nash equilibrium point. For an initial value inside the strategy set, the simulation result, obtained with the simulation toolbox, is shown in Figure 1; from Figure 1, one can see that the state vector of the neural network approximately exponentially converges to the Nash equilibrium point. For an initial value outside the strategy set, the simulation result is shown in Figure 2; from Figure 2, one can see that the state vector of the neural network also approximately exponentially converges to the Nash equilibrium point.
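This example can be sketched numerically as follows; the linear specification and the parameter values are assumptions of this sketch (not values taken from the paper), chosen so that the closed-form Cournot–Nash output $q_{i}^{*} = (a - c)/(3b)$ is available for checking.

```python
import numpy as np

# Illustrative linear Cournot instance: inverse demand P(Q) = a - b*Q and
# cost C_i(q_i) = c*q_i, so d(pi_i)/d(q_i) = a - b*(q1 + q2) - b*q_i - c,
# and the Cournot-Nash output is q_i* = (a - c)/(3*b) for both firms.
a, b, c = 10.0, 1.0, 1.0
q_star = (a - c) / (3.0 * b)

def F(q):
    Q = q[0] + q[1]
    return np.array([a - b * Q - b * q[0] - c,
                     a - b * Q - b * q[1] - c])

def network(q0, alpha=0.1, h=0.05, steps=8000, q_max=10.0):
    """Euler simulation of dq/dt = P_X(q + alpha*F(q)) - q on X = [0, q_max]^2."""
    q = np.asarray(q0, dtype=float)
    for _ in range(steps):
        q = q + h * (np.clip(q + alpha * F(q), 0.0, q_max) - q)
    return q

# Initial values inside and outside the strategy set (cf. Remark 29).
for q0 in ([0.0, 0.0], [12.0, -1.0]):
    assert np.allclose(network(q0), [q_star, q_star], atol=1e-3)
```

Starting outside the strategy set, the trajectory is first driven into $X$ and then settles at the equilibrium, matching Propositions 20 and 21.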

Additionally, if $P$ and $C_{i}$ are given general nonlinear functions such that each $\pi_{i}$ satisfies assumptions (1) and (2), calculating the Nash equilibrium point directly is difficult. However, by using the technique established in this paper, we can still obtain a Nash equilibrium point through the simulation toolbox.

*Example 27 (Hotelling competition). *Consider the typical Hotelling game with two firms and a continuum of consumers. These consumers are distributed on a linear city of unit length according to a uniform density function. Each consumer is entitled to buy at most one unit of the commodity. Set $p_{i}$ and $D_{i}(p_{1}, p_{2})$, $i = 1, 2$, which are firm $i$'s pricing strategy and demand function, respectively; each consumer incurs a transportation cost proportional to the distance between the location of the consumer and the store, with transportation cost coefficient $t$. The production cost is assumed to be identical and equal to $c$ per unit for both firms. Thus, the firms' profit functions can be written as
$$\pi_{i}(p_{1}, p_{2}) = (p_{i} - c)\, D_{i}(p_{1}, p_{2}), \quad i = 1, 2.$$

A Hotelling equilibrium $(p_{1}^{*}, p_{2}^{*})$ means that
$$\pi_{i}\big(p_{i}^{*}, p_{-i}^{*}\big) \ge \pi_{i}\big(p_{i}, p_{-i}^{*}\big), \quad \forall p_{i} \ge c,\ i = 1, 2.$$

As is well known, if the two firms locate at the opposite endpoints of the city, the Nash equilibrium point is $p_{1}^{*} = p_{2}^{*} = c + t$ [27]. Obviously, the strategy set $X_{i}$, $i = 1, 2$, is nonempty, closed, and convex, and the profit functions $\pi_{i}$, $i = 1, 2$, satisfy assumption (2). By Section 4, we can construct the following neural network to solve the Nash equilibrium point:
$$\frac{dp(t)}{dt} = P_{X}\big(p(t) + \alpha F(p(t))\big) - p(t), \tag{30}$$
where $p = (p_{1}, p_{2})^{T}$ is the state vector and $F(p) = \big(\partial\pi_{1}/\partial p_{1},\ \partial\pi_{2}/\partial p_{2}\big)^{T}$, where, for the uniform linear city with $D_{1} = \frac{1}{2} + \frac{p_{2} - p_{1}}{2t}$ and $D_{2} = 1 - D_{1}$,
$$\frac{\partial\pi_{i}}{\partial p_{i}} = \frac{1}{2} + \frac{p_{j} - 2p_{i} + c}{2t}, \quad i, j \in \{1, 2\},\ j \neq i.$$

For given parameter values $c, t > 0$, the Nash equilibrium point is $p^{*} = (c + t, c + t)^{T}$. Similar to Example 26, for an initial value either inside or outside the strategy set, neural network (30) still approximately exponentially converges to the Nash equilibrium point $p^{*}$. The simulation results can be seen in Figures 3 and 4, from which one can see that the state vector of the neural network approximately exponentially converges to the Nash equilibrium point.
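This example admits a similar numerical sketch; the uniform linear-city demand $D_{1} = \frac{1}{2} + \frac{p_{2} - p_{1}}{2t}$, $D_{2} = 1 - D_{1}$ and the parameter values are assumptions of this sketch (not values taken from the paper), chosen so that the well-known equilibrium $p_{i}^{*} = c + t$ is available for checking.

```python
import numpy as np

# Illustrative Hotelling instance: firms at opposite endpoints of a unit
# linear city, unit cost c, transport cost coefficient t; the equilibrium
# prices are p1* = p2* = c + t.
c, t = 1.0, 2.0
p_star = c + t

def F(p):
    """Own-price gradients of pi_i = (p_i - c)*D_i with D1 = 1/2 + (p2 - p1)/(2t)."""
    g1 = 0.5 + (p[1] - 2.0 * p[0] + c) / (2.0 * t)
    g2 = 0.5 + (p[0] - 2.0 * p[1] + c) / (2.0 * t)
    return np.array([g1, g2])

def network(p0, alpha=0.5, h=0.05, steps=6000, p_max=10.0):
    """Euler simulation of dp/dt = P_X(p + alpha*F(p)) - p on X = [c, p_max]^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p + h * (np.clip(p + alpha * F(p), c, p_max) - p)
    return p

# Initial values inside and outside the strategy set.
for p0 in ([1.0, 1.0], [12.0, 0.0]):
    assert np.allclose(network(p0), [p_star, p_star], atol=1e-3)
```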

Similarly, if $D_{i}$, $i = 1, 2$, are given general nonlinear functions such that each $\pi_{i}$ satisfies assumptions (1) and (2), calculating the Nash equilibrium point directly is difficult. However, by using the technique established in this paper, we can still obtain a Nash equilibrium point through the simulation toolbox.

*Remark 28. *The numerical simulation examples show that the results established in this paper are valid from both theoretical and practical points of view. These results provide a new technique for solving the Nash equilibrium point of a concave game. This new technique builds a bridge between the neural network method and game theory and broadens the application domain of neural networks.

*Remark 29. *It is worth pointing out that, when the neural network technique derived in this paper is used to solve the Nash equilibrium point, the initial strategy value can lie outside the strategy set; this provides a new computational algorithm for solving the Nash equilibrium point. The new method differs from traditional computer logic calculation and simulation methods, which require every step's strategy values to lie in the strategy set, and it can reduce computational complexity significantly.

*Remark 30. *The results obtained in this paper show that, theoretically, the Nash equilibrium points of all kinds of concave games can be solved by the neural network technique. However, when the payoff function does not satisfy assumption (2), for example, when the payoff function is only lower semicontinuous, upper semicontinuous, or quasi-continuous, how to use the neural network technique to solve the Nash equilibrium point still needs to be deeply researched; this is our future research direction.

#### 6. Conclusion

The analysis results obtained in this paper imply that every concave game satisfying assumptions (1) and (2) can be equivalently transformed into a neural network model: the equilibrium point of the neural network coincides with the Nash equilibrium point of the concave game. Moreover, the convergence to the equilibrium point is independent of whether the initial value lies in the strategy set. This means that concave games can be implemented by neural networks, or even by hardware. The simulation results for two classic games show that the results established in this paper are valid.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work was supported by the China Postdoctoral Science Foundation (Grant 2012M521718) and the Soft Science Research Project in Guizhou Province ([2011]LKC2004).

#### References

- [1] E. Rasmusen, *Games and Information: An Introduction to Game Theory*, Blackwell Publishers, Oxford, UK, 2000.
- [2] N. McCarty and A. Meirowitz, *Political Game Theory*, Cambridge University Press, Cambridge, UK, 2007.
- [3] C. F. Camerer, *Behavioral Game Theory: Experiments in Strategic Interaction*, Princeton University Press, Princeton, NJ, USA, 2006.
- [4] W. Hodges, "Logic and games," in *The Stanford Encyclopedia of Philosophy*, E. N. Zalta, Ed., 2013.
- [5] T. L. Vincent and J. S. Brown, *Evolutionary Game Theory, Natural Selection, and Darwinian Dynamics*, Cambridge University Press, Cambridge, UK, 2005.
- [6] G. Carmona, *Existence and Stability of Nash Equilibrium*, World Scientific, River Edge, NJ, USA, 2013.
- [7] Y. H. Zhou, J. Yu, and S. W. Xiang, "Essential stability in games with infinitely many pure strategies," *International Journal of Game Theory*, vol. 35, no. 4, pp. 493–503, 2007.
- [8] J. Yu, H. Yang, and C. Yu, "Structural stability and robustness to bounded rationality for non-compact cases," *Journal of Global Optimization*, vol. 44, no. 1, pp. 149–157, 2009.
- [9] Z. Lin, "On existence of vector equilibrium flows with capacity constraints of arcs," *Nonlinear Analysis: Theory, Methods & Applications*, vol. 72, no. 3-4, pp. 2076–2079, 2010.
- [10] S. W. Xiang and Y. H. Zhou, "On essential sets and essential components of efficient solutions for vector optimization problems," *Journal of Mathematical Analysis and Applications*, vol. 315, no. 1, pp. 317–326, 2006.
- [11] Z. Yang and Y. J. Pu, "Existence and stability of solutions for maximal element theorem on Hadamard manifolds with applications," *Nonlinear Analysis: Theory, Methods & Applications*, vol. 75, no. 2, pp. 516–525, 2012.
- [12] J. Y. Halpern and R. Pass, "Algorithmic rationality: adding cost of computation to game theory," *ACM SIGecom Exchanges*, vol. 10, no. 2, pp. 9–15, 2011.
- [13] S. Özyildirim and N. M. Alemdar, "Learning the optimum as a Nash equilibrium," *Journal of Economic Dynamics and Control*, vol. 24, no. 4, pp. 483–499, 2000.
- [14] N. G. Pavlidis, K. E. Parsopoulos, and M. N. Vrahatis, "Computing Nash equilibria through computational intelligence methods," *Journal of Computational and Applied Mathematics*, vol. 175, no. 1, pp. 113–136, 2005.
- [15] R. Porter, E. Nudelman, and Y. Shoham, "Simple search methods for finding a Nash equilibrium," *Games and Economic Behavior*, vol. 63, no. 2, pp. 642–662, 2008.
- [16] X. L. Hu, "Dynamic system methods for solving mixed linear matrix inequalities and linear vector inequalities and equalities," *Applied Mathematics and Computation*, vol. 216, no. 4, pp. 1181–1193, 2010.
- [17] D. Wang, D. Liu, D. Zhao, Y. Huang, and D. Zhang, "A neural-network-based iterative GDHP approach for solving a class of nonlinear optimal control problems with control constraints," *Neural Computing and Applications*, vol. 22, no. 2, pp. 219–227, 2013.
- [18] W. Bian and X. P. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," *IEEE Transactions on Circuits and Systems I*, vol. 60, no. 3, pp. 710–723, 2013.
- [19] J. Yu, *Game Theory and Nonlinear Analysis*, Science Press, Beijing, China, 2008.
- [20] J. Yu, *Game Theory and Nonlinear Analysis*, Science Press, Beijing, China, 2nd edition, 2011.
- [21] D. Kinderlehrer and G. Stampacchia, *An Introduction to Variational Inequalities and Their Applications*, Academic Press, New York, NY, USA, 1980.
- [22] X. B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," *IEEE Transactions on Neural Networks*, vol. 11, no. 6, pp. 1251–1262, 2000.
- [23] Y. S. Xia and J. Wang, "On the stability of globally projected dynamical systems," *Journal of Optimization Theory and Applications*, vol. 100, no. 2, pp. 129–150, 1999.
- [24] Y. M. Li, J. Z. Shen, and Z. B. Xu, "Global convergence analysis on projection-type neural networks," *Chinese Journal of Computers*, vol. 28, no. 7, pp. 1178–1184, 2005.
- [25] Y. Q. Yang, J. Cao, X. Xu, and J. Liu, "A generalized neural network for solving a class of minimax optimization problems with linear constraints," *Applied Mathematics and Computation*, vol. 218, no. 14, pp. 7528–7537, 2012.
- [26] A. Cournot, *Researches into the Mathematical Principles of the Theory of Wealth*, Macmillan, New York, NY, USA, 1987.
- [27] H. Hotelling, "Stability in competition," *Economic Journal*, vol. 39, no. 153, pp. 41–57, 1929.

#### Copyright

Copyright © 2014 Zixin Liu and Nengfa Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.