Research Article  Open Access
Luping Liu, Wensheng Jia, "A New Algorithm to Solve the Generalized Nash Equilibrium Problem", Mathematical Problems in Engineering, vol. 2020, Article ID 1073412, 9 pages, 2020. https://doi.org/10.1155/2020/1073412
A New Algorithm to Solve the Generalized Nash Equilibrium Problem
Abstract
We propose a new algorithm to solve the generalized Nash equilibrium problem (GNEP) in this paper. First, the GNEP is converted into a nonlinear complementarity problem by using the Karush–Kuhn–Tucker (KKT) conditions. Then, the nonlinear complementarity problem is converted into a system of nonlinear equations by using the complementarity function method. To solve the resulting nonlinear equation problem, we design a coevolutionary immune quantum particle swarm optimization (CIQPSO) algorithm by incorporating an immune memory function and an antibody density inhibition mechanism into the quantum-behaved particle swarm optimization algorithm. The algorithm therefore retains the properties of the immune particle swarm optimization algorithm while improving iterative optimization ability and convergence speed. Using probability density selection and the quantum uncertainty principle, the convergence of the CIQPSO algorithm is analyzed. Finally, numerical experiments indicate that the CIQPSO algorithm is superior to the immune particle swarm algorithm, the Newton method for normalized equilibria, and the quasi-variational inequality penalty method, with faster convergence and better offline performance.
1. Introduction
In 1952, Debreu [1] introduced the generalized Nash equilibrium problem (GNEP), in which each player’s feasible strategy set depends on the strategies of the other players. In a GNEP, no player can gain by unilaterally changing his strategy. The GNEP has attracted growing attention and has been widely applied in fields such as economics, currency option pricing, evolutionary biology, computer science, artificial intelligence, and autonomous cars [2, 3]. However, Debreu and Arrow did not give an efficient algorithm for solving the GNEP, and Roughgarden [4] showed that computing a Nash equilibrium is an NP-hard problem. The solution of the GNEP has attracted much attention: Facchinei and Kanzow [5] focused on penalty methods for the GNEP; Shah and Singh [6] presented a modified homotopy algorithm; Aussel [7] developed characterizations of the GNEP as a quasi-variational inequality; Migot and Cojocaru [8] presented variational inequality methods to track the solution set of the GNEP. Using the regularized indicator Nikaido–Isoda-type function, Lalitha and Dhingra [9] presented two constrained optimization reformulations of the GNEP; Dreves [10] considered a linear GNEP and a best-response approach for equilibrium selection in two-player GNEPs; Izmailov and Solodov [11] analyzed error bounds and Newton-type methods for the GNEP. These approaches share the common feature that, under certain assumptions, they convert the GNEP into variational inequalities, quasi-variational inequalities, or nonlinear equations and then apply an appropriate optimization algorithm. Inspired by this work, this paper considers a new algorithm for solving the GNEP. First, the GNEP is transformed into a nonlinear complementarity problem by the Karush–Kuhn–Tucker (KKT) conditions. Second, the nonlinear complementarity problem is transformed into a system of nonlinear equations by the complementarity function method. Finally, we use the CIQPSO algorithm to solve the GNEP by constructing an appropriate fitness function, and we analyze and compare numerical examples to show that the algorithm is effective.
In recent years, swarm intelligence algorithms have shown considerable potential for solving NP-hard optimization problems [12, 13] and have been widely applied in power system management [14], the petroleum industry [15], wireless sensor networks [16], cloud computing multitask scheduling optimization [17], and so forth. It is well known that the GNEP usually has infinitely many solutions, and the Nash equilibrium set is a manifold when the players share constraints [18]. Kanzow and Steck [19] proposed an augmented Lagrangian algorithm for the GNEP that takes into account the particular structure of GNEPs with suitable constraint qualifications. Pang and Fukushima [20] applied an augmented Lagrangian-type approach to quasi-variational inequalities (QVIs); an improved version for QVIs can be found in [21]. With the development of the computation of Nash equilibria, many scholars have tried to use swarm intelligence algorithms modeled on biological evolutionary processes to solve Nash equilibrium problems, such as the immune particle swarm algorithm [22] for the GNEP, the ant colony algorithm [23], and evolutionary algorithms [24] for the Nash equilibria of dynamic games and evolutionary games [25]. As mentioned above, exploiting the biological evolution and behavior laws underlying swarm intelligence algorithms has become a new approach to computing Nash equilibria. Inspired by the works mentioned above, this paper mainly studies swarm intelligence algorithms for solving the GNEP. We present a coevolutionary immune quantum particle swarm optimization (CIQPSO) algorithm by incorporating an immune memory function and an antibody density inhibition mechanism into the quantum-behaved particle swarm optimization algorithm, and we prove the convergence of the CIQPSO algorithm. The CIQPSO algorithm improves global search capability.
Moreover, the convergence of the CIQPSO algorithm is analyzed by introducing a probability density selection function, and some numerical examples show that the algorithm is effective. This paper is organized as follows. In Section 2, we introduce the model and assumptions of the GNEP. In Section 3, the GNEP is transformed into a nonlinear equation problem through the Karush–Kuhn–Tucker (KKT) conditions and the complementarity function method. In Section 4, we present the design of the CIQPSO algorithm and analyze its convergence. In Section 5, we solve the GNEP using the CIQPSO algorithm; some numerical examples show that the algorithm is effective.
2. Model and Assumptions
In this section, we introduce the model and some assumptions of the GNEP.
Let denote the index set of all players, denote the decision variables of all players, and denote the control variable of each player . denotes the decision variables of all other players apart from player , and we write . Assume that the th player’s strategy set is , which depends on the strategies of all other players. is also called the th player’s feasible set, which is defined by equality or inequality constraints. Each player has a payoff (utility) function , which depends on both the player’s own decision variable and the other players’ decision variables . In general, for each player , the constraint functions are given by

where denotes the self-constraints of each player, which depend only on that player’s decision variable, and denotes the common constraints of all players, which depend on all players’ decision variables. The dimensions of and are denoted by with and with , respectively.
If is a Nash equilibrium of the GNEP, then for each player , solves the following optimization problem:
From now on, we assume that the following conditions are satisfied.
Assumption 1 (smoothness and convexity). (a) The functions , , and are twice differentiable with locally continuous second-order derivatives. (b) For each player and any given , the objective function is convex with respect to , and the constraint functions and are convex.
3. The GNEP, Nonlinear Complementarity Problem, and Nonlinear Equation Problem
3.1. The GNEP Is Converted into Nonlinear Complementarity Problem
First, the GNEP is transformed into a nonlinear complementarity problem by using the Karush–Kuhn–Tucker (KKT) conditions. If satisfies an appropriate constraint qualification, then there exist multiplier vectors and such that satisfies the following KKT system [26]:

where denotes the Lagrangian function of system (3), which is equivalent to

where denotes the transpose. Thus, any solution of system (3) corresponds to a solution of problem (2). Collecting the KKT systems of all players, if is a solution of the GNEP, then there exist multiplier vectors and such that satisfies the following system:

where
Under the above assumptions, system (5) can be regarded as a first-order necessary condition for the GNEP. If the GNEP further satisfies Assumption 1 (b), then for any , problem (2) becomes a convex optimization problem, and system (5) becomes a sufficient condition for the GNEP. Let satisfy Assumption 1; if satisfies system (5), then is an equilibrium of the GNEP.
For , system (5) can be written as follows:
System (7) is a nonlinear complementarity problem, so the GNEP is converted into a nonlinear complementarity problem by using the KKT condition of the convex optimization problem.
3.2. The Nonlinear Complementarity Problem Is Converted into the Nonlinear Equation Problem
The key step in this paper is to transform system (7) into nonlinear equations by using the complementarity function method. If the formula

holds, then the function is called a complementarity function. This paper uses the Fischer–Burmeister (FB) function, that is,
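As a concrete illustration, the FB function and its complementarity property can be coded directly (a minimal sketch; the function name is ours):

```python
import math

def fischer_burmeister(a: float, b: float) -> float:
    """Fischer-Burmeister function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    phi(a, b) = 0 if and only if a >= 0, b >= 0, and a * b = 0, so each
    complementarity condition becomes one root-finding condition."""
    return math.sqrt(a * a + b * b) - a - b
```

This lets each complementarity pair of system (7) contribute one equation to the nonlinear system.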
Then, system (7) is equivalent to the following formula:
Accordingly, system (7) is equivalent to . Thus, the GNEP is transformed into a nonlinear equation problem by the complementarity function. Finally, we design the CIQPSO algorithm to solve the GNEP by constructing a suitable fitness function.
In the CIQPSO algorithm, the fitness function is designed as follows:
Obviously, follows from , so solving the system is equivalent to minimizing over the feasible constraint region; that is, the smaller a particle’s fitness value, the better the particle adapts to the environment.
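For instance, with the residual vector of the nonlinear system stacked into an array, the fitness can be taken as its squared Euclidean norm (a sketch; the paper's exact fitness formula is the elided equation above, which we assume has this least-squares form):

```python
import numpy as np

def fitness(phi_residuals) -> float:
    """Fitness of a candidate point z: the squared Euclidean norm of the
    residual vector Phi(z).  It is zero exactly at a solution of the
    nonlinear system, and smaller values mean better adaptation."""
    r = np.asarray(phi_residuals, dtype=float)
    return float(np.dot(r, r))
```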
4. The Design of the CIQPSO Algorithm
4.1. Quantum Particle Swarm Optimization (QPSO) Algorithm
Kennedy and Eberhart [27] first proposed the particle swarm optimization (PSO) algorithm in 1995. The PSO algorithm has the properties of iterative optimization and biological intelligence. However, van den Bergh [28] proved that the PSO algorithm is not guaranteed to converge to a global or even a local optimum. In 2004, Sun et al. [29] proposed the quantum-behaved particle swarm optimization (QPSO) algorithm from the perspective of quantum mechanics. In the QPSO algorithm, with population size and space dimension , the current position vector of particle is . Each particle’s position is affected by its local best-known position and is also guided toward the global best-known position in the feasible search space. At the th step, each particle’s individual best position and the global best position of all particles are updated using the following formulas:
Trajectory analysis indicates that each particle must converge to its local attractor , which is randomly generated between and . The coordinate can be calculated by the following equation:

where , and the function denotes a random number uniformly distributed between 0 and 1.

denotes the average best position and is defined as , that is,
In the quantum time-space model, the quantum state of a particle is represented by a wave function. From the viewpoint of classical dynamics, when a particle moves in an attractive potential field centered at point , it must be in a bound state to avoid explosion and ensure convergence. In a quantum bound state, the particle can appear anywhere with a certain probability, provided that the probability density tends to 0 as the particle’s distance from the center tends to . In quantum space, the particle’s position is updated with the Monte Carlo method, and the update formula is as follows:

where , and denotes the contraction-expansion coefficient, which can take a fixed value or decrease linearly (see [30]). In this paper, the update formula of is as follows:

where and are the maximum and minimum contraction-expansion coefficients, respectively, and and are the maximum and current iteration numbers, respectively.
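The Monte Carlo position update and the linearly decreasing contraction-expansion coefficient can be sketched as follows (assuming the standard QPSO forms from [29, 30]; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def beta_linear(beta_max: float, beta_min: float, t: int, t_max: int) -> float:
    """Linearly decreasing contraction-expansion coefficient."""
    return beta_max - (beta_max - beta_min) * t / t_max

def qpso_step(X, pbest, gbest, beta):
    """One QPSO Monte Carlo update for the whole swarm.
    X, pbest: (N, D) arrays; gbest: (D,) array; beta: scalar.
    Each particle is pulled toward a random local attractor p between
    pbest and gbest; the jump length is beta * |mbest - x| * ln(1/u)."""
    n, d = X.shape
    mbest = pbest.mean(axis=0)                  # average best position
    phi = rng.random((n, d))
    p = phi * pbest + (1.0 - phi) * gbest       # local attractors
    u = rng.random((n, d))
    sign = np.where(rng.random((n, d)) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
```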
4.2. The Design of the CIQPSO Algorithm
The CIQPSO algorithm is proposed by combining the QPSO algorithm with the immune PSO (IPSO) algorithm: it introduces an immune memory function and an antibody density inhibition mechanism into the QPSO algorithm. The IPSO algorithm has the properties of antigen recognition, immune memory, and antibody density inhibition. Immune memory cells are obtained by preserving the global best position of each generation, and overly similar memory cells are eliminated. The CIQPSO algorithm shares information among all particles, which coevolve. The objective function and constraints are regarded as antigens, and a candidate solution of the GNEP is regarded as an antibody (also called a particle). Each particle continuously adjusts its current position toward the global optimal solution using its own best position information and the best position information of the whole swarm. The QPSO algorithm easily becomes trapped in a local optimum when particles are excessively concentrated, and particles of poor fitness but with a promising evolutionary trend should not be lost. Thus, we introduce the antibody probability density selection formula to maintain particle diversity. The antibodies form a nonempty set , and the distance of antibody is calculated by the following equation:
Following [31], we define the density of particle as follows:
Therefore, we can obtain the probability density selection function from

where and are the optimal position and fitness value of particle , respectively. Adding new members mainly maintains the dynamic balance of the swarm and helps adjust swarm diversity. In particular, when the swarm exhibits poor diversity and weak global search ability, randomly generated antibodies allow the swarm to shift toward regions with better solutions.
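The density-inhibition idea can be sketched numerically (an assumed common form, since the paper's exact formulas are in the elided equations above: density is taken as inversely proportional to an antibody's summed Euclidean distance to the others, so crowded antibodies are selected with lower probability):

```python
import numpy as np

def selection_probability(antibodies) -> np.ndarray:
    """Density-based selection probabilities for antibodies given as the
    rows of an (N, D) array.  Antibodies in sparse regions (large summed
    distance to the others, i.e. low density) get higher probability;
    crowded antibodies are inhibited."""
    A = np.asarray(antibodies, dtype=float)
    diff = A[:, None, :] - A[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))   # pairwise distances
    spread = dist.sum(axis=1)                 # large spread = low density
    return spread / spread.sum()
```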
4.3. Arithmetic Flow of the CIQPSO Algorithm
The implementation steps of the CIQPSO algorithm are as follows:
Step 1: parameter initialization. is the maximum iteration number, is the precision, is the population size, and is the contraction-expansion coefficient; randomly generate the particles’ initial positions .
Step 2: compute the fitness value of each particle; if , then ; otherwise, . Each particle’s current best position is , and the current best position of all particles is .
Step 3: calculate the average best position by formula (14).
Step 4: update each particle’s position by formula (15), and place the of each iteration into the memory library as a memory cell particle.
Step 5: compute the fitness value of each updated particle; if , then ; otherwise, . If the stopping conditions are satisfied, go to Step 10; otherwise, go to Step 6.
Step 6: randomly generate particles with initial positions .
Step 7: choose particles from the particles by the probability density selection formula (19).
Step 8: select particles from the memory library to replace the particles of poorest fitness in the population; the immune system then generates a new generation for the next iteration.
Step 9: calculate the new positions of the particles by formula (15), and return to Step 2.
Step 10: stopping condition. If the maximum iteration number or the precision termination condition is met, stop the evolution; otherwise, return to Step 2.
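The steps above can be condensed into a runnable sketch on a generic objective (a simplified illustration, not the paper's exact implementation: the density-ranked selection of Step 7 is reduced to a random draw from the newcomer pool, and Step 8 reinjects a remembered global best):

```python
import numpy as np

def ciqpso(objective, dim, lo, hi, n_particles=20, max_iter=200,
           beta_max=1.0, beta_min=0.5, n_random=10, seed=0):
    """Condensed sketch of the CIQPSO loop (Steps 1-10)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_particles, dim))          # Step 1
    pbest = X.copy()
    pbest_fit = np.array([objective(x) for x in X])           # Step 2
    g = int(np.argmin(pbest_fit))
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    memory = [gbest.copy()]                                   # immune memory
    for t in range(max_iter):
        beta = beta_max - (beta_max - beta_min) * t / max_iter
        mbest = pbest.mean(axis=0)                            # Step 3
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1.0 - phi) * gbest                 # local attractors
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)  # Step 4
        X = np.clip(X, lo, hi)
        fit = np.array([objective(x) for x in X])             # Step 5
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = X[better], fit[better]
        g = int(np.argmin(pbest_fit))
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
            memory.append(gbest.copy())
        # Steps 6-8: immune diversity maintenance (simplified):
        # replace the two worst particles by a random newcomer and a memory cell
        newcomers = rng.uniform(lo, hi, size=(n_random, dim)) # Step 6
        worst = np.argsort(fit)[-2:]
        X[worst[0]] = newcomers[rng.integers(n_random)]       # Step 7
        X[worst[1]] = memory[rng.integers(len(memory))]       # Step 8
    return gbest, gbest_fit                                   # Steps 9-10
```

On a toy 2-D least-squares objective this sketch contracts toward the minimizer within a few hundred iterations.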
The CIQPSO algorithm implementation process is shown in Figure 1.
4.4. The Performance Evaluation of the CIQPSO Algorithm
The CIQPSO algorithm is a new type of bio-evolving swarm intelligence algorithm whose evolution rules resemble those of the genetic algorithm (GA), for instance, natural selection and survival of the fittest. Therefore, we use the quantitative analysis method of [32] to test the convergence of the algorithm through its offline performance.
Definition 1. Offline performance is defined as follows:

where . In the algorithm iteration process, the offline performance is the cumulative average of the best fitness values of each generation.
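Per Definition 1, the offline performance at generation T is the running mean of the best fitness values of generations 1 through T; a minimal sketch:

```python
import numpy as np

def offline_performance(best_fitness_per_gen) -> np.ndarray:
    """Offline performance curve: entry T-1 is the average of the best
    fitness values of generations 1..T (cumulative mean)."""
    f = np.asarray(best_fitness_per_gen, dtype=float)
    return np.cumsum(f) / np.arange(1, f.size + 1)
```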
4.5. The Convergence of the CIQPSO Algorithm
Theorem 1 (see [26]). In -dimensional space, the necessary and sufficient condition for particle ’s position to converge to its attractor with probability 1 is that each one-dimensional coordinate converges to with probability 1.
Proof. Necessity: since , , (1) when , , holds, and hence is established. (2) When , we have , and the following inclusion relation holds:

thereby

Taking the limit on both sides of the above equation simultaneously, we obtain

which is .
Sufficiency: let ; then for each , , we have

If , we obtain

and . Therefore,

Taking the limit on both sides of the above equation simultaneously, we obtain

which is .
By Theorem 1, the convergence of a single particle’s position satisfying equation (15) reduces to the convergence of the particle in one-dimensional space. If the condition of Theorem 1 is satisfied, the CIQPSO algorithm converges. This also shows that the necessary and sufficient condition for the particle’s position to converge to the attractor with probability 1 is .
5. Numerical Experiments
To test the performance of the CIQPSO algorithm, we set the parameters as follows: the population size is and , and the maximum number of iterations is . The fitness function precision of Examples 1 and 2 is , and that of Example 3 is .
Example 1. Consider the GNEP from [33]. The players’ decision variables are and , and the payoff functions of the players are given by

The offline performance and numerical results are shown in Table 1 and Figure 2, respectively.
From Table 1, the average number of iterations over five runs is 94, and the approximate GNEP solution of Example 1 is . Compared with the projection-like methods of [33], the CIQPSO algorithm not only reduces the number of iterations but also allows the initial point to be generated randomly; the calculation does not depend on the selection of the initial point. By contrast, when the initial point in [33] is chosen as (10, 0), the Nash equilibrium point cannot be computed. The average number of iterations here is also smaller than the 110 reported in [22], which shows that the algorithm is more time-saving than the immune particle swarm optimization algorithm. In addition, Figure 2 shows that the CIQPSO algorithm converges faster than the IPSO algorithm [22], which means the CIQPSO algorithm has better convergence performance.

Example 2. Consider the GNEP from [26]; this generalized game has infinitely many solutions given by , . The players’ decision variables are and , and the payoff functions of the players are given by

The offline performance and numerical results are shown in Table 2 and Figure 3, respectively.
From Table 2, the average number of iterations over five runs is 35, and the approximate GNEP solution of Example 2 is . In [26], Facchinei et al. discussed Newton methods for the normalized equilibrium, but some solutions of the GNEP can be lost. The CIQPSO algorithm, by contrast, uses the probability density selection function to guarantee that solutions of poor fitness are not lost. Furthermore, Figure 3 shows that the CIQPSO algorithm is faster than the IPSO algorithm and attains a smaller fitness value at the generalized Nash equilibrium point, so the convergence performance of the CIQPSO algorithm is effective.

Example 3. Consider the GNEP from [34]. The players’ decision variables are and , and the payoff functions of the players are given by

We use the CIQPSO algorithm to solve this problem; the numerical results are shown in Table 3.
From Table 3, the average number of iterations is 2, and the approximate GNEP solution of Example 3 is . Under the same precision, the approximate Nash equilibrium is found in only 2 iterations, which is better than the quasi-variational inequality penalty method of [34]: the number of iterations is smaller, the computation is faster, and the algorithm does not rely on the selection of the initial point. Thus, this example further illustrates the effectiveness of using a swarm intelligence algorithm to solve the GNEP.

6. Conclusions
Through transformations of the GNEP and the construction of an appropriate fitness function, we solve for approximate solutions of the GNEP using the CIQPSO algorithm, and some numerical examples illustrate that the algorithm is effective. The GNEP is transformed into a nonlinear equation problem in Section 3; the CIQPSO algorithm is then designed and applied to the GNEP. Moreover, using the probability density selection function and the quantum uncertainty principle, the convergence of the CIQPSO algorithm is demonstrated. Among the numerical examples, Examples 1 and 2 show that the CIQPSO algorithm outperforms the IPSO algorithm, converging faster and not relying on the selection of the initial point. Example 2 shows that the CIQPSO algorithm is superior to the traditional Newton method for normalized equilibria and, through the probability density selection function, guarantees that solutions of poor fitness are not lost. Example 3 shows that the CIQPSO algorithm is superior to the quasi-variational inequality penalty method, with faster and more time-saving computation under the same precision. In summary, the CIQPSO algorithm is superior to the immune particle swarm algorithm, the Newton method for normalized equilibria, and the quasi-variational inequality penalty method, with faster convergence and better offline performance. More specifically, the algorithm requires neither high smoothness of the fitness function nor a careful choice of the initial point, which further demonstrates that the CIQPSO algorithm is effective in solving the GNEP. So far, little is known about using swarm intelligence algorithms to solve the GNEP, and this theme is well worth further study.
Data Availability
All the data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant no. 11561013) and Guizhou Science Foundation (Grant no. [2020]1Y284).
References
[1] G. Debreu, “A social equilibrium existence theorem,” Proceedings of the National Academy of Sciences, vol. 38, no. 10, pp. 886–893, 1952.
[2] G. Scutari, F. Facchinei, J.-S. Pang, and D. P. Palomar, “Real and complex monotone communication games,” IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 4197–4231, 2014.
[3] A. Dreves and M. Gerdts, “A generalized Nash equilibrium approach for optimal control problems of autonomous cars,” Optimal Control Applications and Methods, vol. 39, no. 1, pp. 326–342, 2018.
[4] T. Roughgarden, “Computing equilibria: a computational complexity perspective,” Economic Theory, vol. 42, no. 1, pp. 193–236, 2010.
[5] F. Facchinei and C. Kanzow, “Penalty methods for the solution of generalized Nash equilibrium problems,” SIAM Journal on Optimization, vol. 20, no. 5, pp. 2228–2253, 2010.
[6] K. J. Shah and T. Singh, “The modified homotopy algorithm for dispersion phenomena,” International Journal of Applied and Computational Mathematics, vol. 3, no. 1, pp. 785–799, 2017.
[7] D. Aussel, A. Sultana, and V. Vetrivel, “On the existence of projected solutions of quasi-variational inequalities and generalized Nash equilibrium problems,” Journal of Optimization Theory and Applications, vol. 170, no. 3, pp. 818–837, 2016.
[8] T. Migot and M.-G. Cojocaru, “A parametrized variational inequality approach to track the solution set of a generalized Nash equilibrium problem,” European Journal of Operational Research, vol. 283, no. 3, pp. 1136–1147, 2020.
[9] C. S. Lalitha and M. Dhingra, “Optimization reformulations of the generalized Nash equilibrium problem using regularized indicator Nikaido–Isoda function,” Journal of Global Optimization, vol. 57, pp. 843–861, 2013.
[10] A. Dreves, “A best-response approach for equilibrium selection in two-player generalized Nash equilibrium problems,” Optimization, vol. 68, no. 12, pp. 2269–2295, 2019.
[11] A. F. Izmailov and M. V. Solodov, “On error bounds and Newton-type methods for generalized Nash equilibrium problems,” Computational Optimization and Applications, vol. 59, no. 1-2, pp. 201–218, 2014.
[12] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Springer-Verlag, Berlin, Germany, 2nd edition, 2015.
[13] E. Wild, A Study of Heuristic Approaches for Solving Generalized Nash Equilibrium Problems and Related Games, University of Guelph, Guelph, Canada, 2017.
[14] G. S. Jasmine and P. Vijayakumar, “Congestion management in deregulated power system using heuristic search algorithms incorporating wireless technology,” Wireless Personal Communications, vol. 94, pp. 2665–2680, 2017.
[15] H. Rahmanifard and T. Plaksina, “Application of artificial intelligence techniques in the petroleum industry: a review,” Artificial Intelligence Review, vol. 52, no. 4, pp. 2295–2318, 2019.
[16] S. P. Maruthi and T. Panigrahi, “Robust mixed source localization in WSN using swarm intelligence algorithms,” Digital Signal Processing, vol. 98, Article ID 102651, 2020.
[17] A. Gawanmeh, S. Parvin, and A. Alwadi, “A genetic algorithmic method for scheduling optimization in cloud computing services,” Arabian Journal for Science and Engineering, vol. 43, no. 12, pp. 6709–6718, 2018.
[18] D. Dorsch, H. T. Jongen, and V. Shikhman, “On structure and computation of generalized Nash equilibria,” SIAM Journal on Optimization, vol. 23, no. 1, pp. 452–474, 2013.
[19] C. Kanzow and D. Steck, “Augmented Lagrangian methods for the solution of generalized Nash equilibrium problems,” SIAM Journal on Optimization, vol. 26, no. 4, pp. 2034–2058, 2016.
[20] J.-S. Pang and M. Fukushima, “Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games,” Computational Management Science, vol. 2, no. 1, pp. 21–56, 2005.
[21] C. Kanzow, “On the multiplier-penalty-approach for quasi-variational inequalities,” Mathematical Programming, vol. 160, no. 1-2, pp. 33–63, 2016.
[22] W. S. Jia, S. W. Xiang, J. F. Yang, and J. H. He, “Solving generalized Nash equilibrium problem based on immune particle swarm,” Application Research of Computers, vol. 30, no. 9, pp. 2637–2640, 2013.
[23] Y. Wang, W. G. Zhang, L. Fu et al., “Nash equilibrium strategies approach for aerial combat based on elite re-election particle swarm optimization,” Control Theory and Applications, vol. 32, no. 7, pp. 857–865, 2015.
[24] A. Rezaee, “The Nash equilibrium point of dynamic games using evolutionary algorithms in linear dynamics and quadratic system,” Automatic Control and Computer Sciences, vol. 52, no. 2, pp. 109–117, 2018.
[25] Z. Wang, M. Jusup, L. Shi, J.-H. Lee, Y. Iwasa, and S. Boccaletti, “Exploiting a cognitive bias promotes cooperation in social dilemma experiments,” Nature Communications, vol. 9, no. 1, 2018.
[26] F. Facchinei, A. Fischer, and V. Piccialli, “Generalized Nash equilibrium problems and Newton methods,” Mathematical Programming, vol. 117, no. 1, pp. 163–194, 2009.
[27] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 1995.
[28] F. van den Bergh, An Analysis of Particle Swarm Optimizers, University of Pretoria, Pretoria, South Africa, 2001.
[29] J. Sun, B. Feng, and W. B. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR, USA, June 2004.
[30] J. Sun, W. Fang, X. Wu, V. Palade, and W. Xu, “Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection,” Evolutionary Computation, vol. 20, no. 3, pp. 349–393, 2012.
[31] G. Lu, D. Tan, and M. Zhao, “Improvement on regulating definition of antibody density of immune algorithm,” in Proceedings of the 9th International Conference on Neural Information Processing, Singapore, November 2002.
[32] K. A. DeJong, Analysis of the Behaviour of a Class of Genetic Adaptive Systems, University of Michigan, Ann Arbor, MI, USA, 1975.
[33] J. Z. Zhang, B. Qu, and N. H. Xiu, “Some projection-like methods for the generalized Nash equilibria,” Computational Optimization and Applications, vol. 45, no. 1, pp. 89–109, 2010.
[34] W. J. Shi, G. H. Chen, and Z. B. Zhu, “Using quasi-variational inequalities penalty method to solve the generalized Nash equilibrium problems,” China Computer and Communication, vol. 19, no. 10, pp. 49–51, 2017.
Copyright
Copyright © 2020 Luping Liu and Wensheng Jia. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.