Abstract

We propose a new algorithm for solving the generalized Nash equilibrium problem (GNEP) in this paper. First, the GNEP is transformed into a nonlinear complementarity problem by using the Karush–Kuhn–Tucker (KKT) conditions. Then, the nonlinear complementarity problem is converted into a system of nonlinear equations by means of a complementarity function. For this nonlinear equation problem, we design a coevolutionary immune quantum particle swarm optimization (CIQPSO) algorithm by incorporating an immune memory function and an antibody density inhibition mechanism into the quantum particle swarm optimization algorithm. The resulting algorithm therefore retains the properties of the immune particle swarm optimization algorithm while improving iterative optimization ability and convergence speed. Using probability density selection and the quantum uncertainty principle, the convergence of the CIQPSO algorithm is analyzed. Finally, numerical experiments indicate that the CIQPSO algorithm outperforms the immune particle swarm optimization algorithm, the Newton method for normalized equilibria, and the quasivariational inequality penalty method, and that it has faster convergence and better off-line performance.

1. Introduction

In 1952, Debreu [1] introduced the generalized Nash equilibrium problem (GNEP), in which each player's feasible strategy set depends on the strategies of the other players. In the GNEP, no player can gain by unilaterally changing his strategy. The GNEP has attracted growing attention and has been widely applied in fields such as economics, currency option pricing, evolutionary biology, computer science, artificial intelligence, and autonomous cars [2, 3]. However, Debreu and Arrow did not give an efficient algorithm for solving the GNEP, and Roughgarden [4] showed that computing a Nash equilibrium is an NP-hard problem. The solution of the GNEP has therefore received much attention: Facchinei and Kanzow [5] focused on penalty methods for the GNEP; Kunjan and Twinkle [6] presented a modified homotopy algorithm; Aussel [7] developed characterizations of the GNEP as a quasivariational inequality; Migot and Cojocaru [8] presented variational inequality methods to track the solution set of the GNEP; using a regularized indicator Nikaido–Isoda-type function, Lalitha and Dhingra [9] presented two constrained optimization formulations of the GNEP; Dreves [10] considered a linear GNEP and a best-response approach for equilibrium selection in two-player GNEPs; Izmailov and Solodov [11] analyzed error bounds and Newton-type methods for the GNEP. These approaches share the common feature that they convert the GNEP into variational inequalities, quasivariational inequalities, or nonlinear equations under certain assumptions and then apply suitable optimization algorithms. Inspired by this work, the present paper considers a new algorithm for solving the GNEP. First, the GNEP is transformed into a nonlinear complementarity problem by the Karush–Kuhn–Tucker (KKT) conditions. Second, the nonlinear complementarity problem is transformed into a system of nonlinear equations by the complementarity function method. Finally, we solve the GNEP with the CIQPSO algorithm by constructing an appropriate fitness function, and we analyze and compare several numerical examples to show that the algorithm is effective.

In recent years, swarm intelligence algorithms have shown considerable potential for solving NP-hard optimization problems [12, 13] and have been applied in power system management [14], the petroleum industry [15], wireless sensor networks [16], cloud computing multitask scheduling optimization [17], and so forth. It is well known that the GNEP usually has an infinite number of solutions, and the Nash equilibrium set is a manifold when the players share constraints [18]. Kanzow and Steck [19] proposed an augmented Lagrangian algorithm for the GNEP that takes into account the particular structure of GNEPs under suitable constraint qualifications. Pang and Fukushima [20] applied an augmented Lagrangian-type approach to quasivariational inequalities (QVIs); an improved version for QVIs can be found in [21]. With the development of computational methods for Nash equilibria, many scholars have tried to use swarm intelligence algorithms inspired by biological evolution to solve Nash equilibrium problems, such as the immune particle swarm algorithm [22] for the GNEP, the ant colony algorithm [23], and evolutionary algorithms [24] for computing Nash equilibria of dynamic games and evolutionary games [25]. Computing Nash equilibria through the biological evolution and behavior rules underlying swarm intelligence algorithms has thus become a new approach. Inspired by the works mentioned above, this paper mainly studies swarm intelligence algorithms for solving the GNEP. We present a coevolutionary immune quantum particle swarm optimization (CIQPSO) algorithm that incorporates an immune memory function and an antibody density inhibition mechanism into the quantum particle swarm optimization algorithm, and we prove the convergence of the CIQPSO algorithm. The CIQPSO algorithm improves the global optimization capability, and its convergence is analyzed by introducing a probability density selection function. Some numerical examples show that the algorithm is effective. This paper is organized as follows. In Section 2, we introduce the model and assumptions of the GNEP. In Section 3, the GNEP is transformed into a nonlinear equation problem through the Karush–Kuhn–Tucker (KKT) conditions and the complementarity function method. In Section 4, we describe the design and the convergence analysis of the CIQPSO algorithm. In Section 5, we solve the GNEP with the CIQPSO algorithm, and some numerical examples show that the algorithm is effective.

2. Model and Assumptions

In this section, we first introduce the model and some assumptions of the GNEP.

Let $\mathcal{N}=\{1,2,\ldots,N\}$ denote the index set of all players, let $x=(x^{1},\ldots,x^{N})\in\mathbb{R}^{n}$ with $n=\sum_{\nu=1}^{N}n_{\nu}$ denote the decision variables of all players, and let $x^{\nu}\in\mathbb{R}^{n_{\nu}}$ denote the control variable of player $\nu\in\mathcal{N}$. $x^{-\nu}$ denotes the decision variables of all players other than player $\nu$, and we write $x=(x^{\nu},x^{-\nu})$. Assume that the $\nu$th player's strategy set is $X_{\nu}(x^{-\nu})$, which depends on the strategies of all other players. $X_{\nu}(x^{-\nu})$ is also called the $\nu$th player's feasible set and is defined by equality or inequality constraints. Each player has a payoff (utility) function $\theta_{\nu}(x^{\nu},x^{-\nu})$, which depends both on the player's own decision variable $x^{\nu}$ and on the other players' decision variables $x^{-\nu}$. In general, for each player $\nu$, the constraint functions are given by $g^{\nu}(x^{\nu})\le 0$ and $h(x^{\nu},x^{-\nu})\le 0$, where $g^{\nu}$ collects the self-constraints of player $\nu$, which depend only on that player's own decision variable, and $h$ collects the common constraints shared by all players, which depend on all players' decision variables. The dimensions of $g^{\nu}$ and $h$ are denoted by $m_{\nu}$ and $l$, with $g^{\nu}:\mathbb{R}^{n_{\nu}}\to\mathbb{R}^{m_{\nu}}$ and $h:\mathbb{R}^{n}\to\mathbb{R}^{l}$, respectively.

A point $x^{*}=(x^{*1},\ldots,x^{*N})$ is a generalized Nash equilibrium of the GNEP if, for every player $\nu\in\mathcal{N}$, the strategy $x^{*\nu}$ solves player $\nu$'s optimization problem (2) with the other players' strategies $x^{*-\nu}$ held fixed.
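In the notation introduced above, problem (2) can be written in the standard form

```latex
\min_{x^{\nu}} \; \theta_{\nu}\bigl(x^{\nu}, x^{*-\nu}\bigr)
\quad \text{s.t.} \quad
g^{\nu}\bigl(x^{\nu}\bigr) \le 0, \qquad
h\bigl(x^{\nu}, x^{*-\nu}\bigr) \le 0 .
```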

From now on, we assume that the following conditions are satisfied.

Assumption 1 (smoothness and convexity). (a) The functions $\theta_{\nu}$, $g^{\nu}$, and $h$ are twice differentiable with locally continuous second-order derivatives. (b) For each player $\nu$ and any given $x^{-\nu}$, the objective function $\theta_{\nu}(\cdot,x^{-\nu})$ is convex with respect to $x^{\nu}$, and the constraint functions $g^{\nu}$ and $h(\cdot,x^{-\nu})$ are convex.

3. The GNEP, Nonlinear Complementarity Problem, and Nonlinear Equation Problem

3.1. The GNEP Is Converted into a Nonlinear Complementarity Problem

First, the GNEP is transformed into a nonlinear complementarity problem by using the Karush–Kuhn–Tucker (KKT) conditions. If $x^{*\nu}$ satisfies an appropriate constraint qualification, then there exist multiplier vectors $\lambda^{\nu}$ and $\mu^{\nu}$ such that $(x^{*\nu},\lambda^{\nu},\mu^{\nu})$ satisfies the KKT system (3) of player $\nu$'s problem [26], where $L_{\nu}$ denotes the Lagrangian of problem (2) and the superscript $T$ means transpose. Thus, under the constraint qualification, the KKT system (3) is satisfied by any solution of problem (2). Concatenating the KKT systems of all players, if $x^{*}$ is a solution of the GNEP, then there exist multiplier vectors $\lambda=(\lambda^{1},\ldots,\lambda^{N})$ and $\mu=(\mu^{1},\ldots,\mu^{N})$ such that $(x^{*},\lambda,\mu)$ satisfies system (5).
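For reference, with $L_{\nu}$ the Lagrangian of problem (2) in the notation above, the per-player KKT system (3) can be written in the familiar form

```latex
\begin{aligned}
& \nabla_{x^{\nu}} L_{\nu}\bigl(x, \lambda^{\nu}, \mu^{\nu}\bigr)
  = \nabla_{x^{\nu}} \theta_{\nu}(x)
  + \nabla_{x^{\nu}} g^{\nu}(x^{\nu})^{T} \lambda^{\nu}
  + \nabla_{x^{\nu}} h(x)^{T} \mu^{\nu} = 0, \\
& 0 \le \lambda^{\nu} \ \perp \ -g^{\nu}(x^{\nu}) \ge 0, \qquad
  0 \le \mu^{\nu} \ \perp \ -h(x) \ge 0 ,
\end{aligned}
```

where $\perp$ denotes componentwise complementarity; system (5) is obtained by stacking these conditions over all players $\nu\in\mathcal{N}$.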

Under the above assumptions, system (5) can be regarded as a first-order necessary condition for the GNEP. If the GNEP further satisfies Assumption 1(b), then for any fixed $x^{-\nu}$, problem (2) is a convex optimization problem and system (5) also becomes a sufficient condition. In other words, if the GNEP satisfies Assumption 1 and $(x^{*},\lambda,\mu)$ satisfies system (5), then $x^{*}$ is a generalized Nash equilibrium of the GNEP.

Writing $z=(x,\lambda,\mu)$ for the collection of primal variables and multipliers, system (5) can be written as follows:

System (7) is a nonlinear complementarity problem; hence, the GNEP has been converted into a nonlinear complementarity problem by using the KKT conditions of the convex optimization problem.
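Recall that a nonlinear complementarity problem asks, for a given mapping $F$, to find a vector $w$ such that

```latex
w \ge 0, \qquad F(w) \ge 0, \qquad w^{T} F(w) = 0 .
```

System (7) couples complementarity conditions of this type (between the multipliers and the constraint values) with the nonlinear equations arising from the Lagrangian gradients.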

3.2. The Nonlinear Complementarity Problem Is Converted into the Nonlinear Equation Problem

The key step in this paper is to transform system (7) into a system of nonlinear equations by using the complementarity function method. A function $\phi:\mathbb{R}^{2}\to\mathbb{R}$ is called a complementarity function if $\phi(a,b)=0$ holds exactly when $a\ge 0$, $b\ge 0$, and $ab=0$. This paper uses the Fischer–Burmeister (FB) function.
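The FB function has the well-known closed form

```latex
\phi_{FB}(a, b) = \sqrt{a^{2} + b^{2}} - a - b ,
\qquad
\phi_{FB}(a, b) = 0 \;\Longleftrightarrow\; a \ge 0, \; b \ge 0, \; ab = 0 .
```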

Then, system (7) is equivalent to the following formula:

Accordingly, system (7) is equivalent to the nonlinear equation system $\Phi(z)=0$. The GNEP is thus transformed into a nonlinear equation problem by means of the complementarity function. Finally, we design the CIQPSO algorithm to solve the GNEP by constructing a suitable fitness function.

In the CIQPSO algorithm, the fitness function is designed as follows:

Obviously, solving $\Phi(z)=0$ is equivalent to finding the minimum value of the fitness function over the feasible constraint region; that is, the smaller a particle's fitness value, the better that particle is adapted to the environment.
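To make the construction concrete, the following Python sketch builds such a fitness function under the assumption that the fitness is the squared Euclidean norm of the FB-reformulated KKT residual; the function names, the packing of $z$ into primal variables and a single multiplier block, and the user-supplied grad_lagrangian and constraints callables are illustrative, not the paper's exact implementation.

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister function: zero iff a >= 0, b >= 0, and a * b = 0.
    return np.sqrt(a ** 2 + b ** 2) - a - b

def kkt_residual(grad_lagrangian, constraints, x, lam):
    """Stack the stationarity equations with the FB reformulation of the
    complementarity conditions lam >= 0, -c(x) >= 0, lam^T c(x) = 0."""
    c = constraints(x)                       # all constraint values, c(x) <= 0
    stationarity = grad_lagrangian(x, lam)   # gradient of the Lagrangian w.r.t. x
    complementarity = fb(lam, -c)            # FB function applied componentwise
    return np.concatenate([stationarity, complementarity])

def fitness(grad_lagrangian, constraints, z, n):
    # z packs the primal variables (first n entries) and the multipliers.
    x, lam = z[:n], z[n:]
    r = kkt_residual(grad_lagrangian, constraints, x, lam)
    return float(np.dot(r, r))               # squared Euclidean norm of the residual
```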

4. The Design of the CIQPSO Algorithm

4.1. Quantum Particle Swarm Optimization (QPSO) Algorithm

Kennedy and Eberhart [27] first proposed the particle swarm optimization (PSO) algorithm in 1995. The PSO algorithm has the properties of iterative optimization and biological intelligence. However, van den Bergh [28] proved that the PSO algorithm is guaranteed to converge to neither a global nor a local optimum. In 2004, Sun et al. [29] proposed the quantum-behaved particle swarm optimization (QPSO) algorithm from the perspective of quantum mechanics. In the QPSO algorithm with population size $M$ and search space dimension $D$, the current position vector of particle $i$ is $X_{i}(t)=(X_{i1}(t),\ldots,X_{iD}(t))$. Each particle's position is affected by its personal best position $P_{i}(t)$ and is also guided toward the global best position $G(t)$ in the feasible search space. At step $t$, each particle's personal best position and the swarm's global best position are updated using the following formulas:

Trajectory analyses of the particles indicate that each particle must converge to its local attractor $p_{i}(t)$, which lies randomly between the personal best position $P_{i}(t)$ and the global best position $G(t)$. Its coordinates can be calculated by $p_{ij}(t)=\varphi_{j}(t)P_{ij}(t)+(1-\varphi_{j}(t))G_{j}(t)$, where $\varphi_{j}(t)=\mathrm{rand}(0,1)$ and $\mathrm{rand}(0,1)$ denotes random numbers uniformly distributed between 0 and 1.

The quantity $mbest(t)$ denotes the average best position, that is, the mean of the personal best positions of all particles, $mbest(t)=\frac{1}{M}\sum_{i=1}^{M}P_{i}(t)$ (formula (14)).

In the quantum time-space model, the quantum state of a particle is represented by a wave function. From the viewpoint of classical dynamics, when particles move in an attractive potential field centered at the point $p_{i}(t)$, they must be in a bound state to avoid explosion and to ensure convergence. In a quantum bound state, the particle can appear anywhere with a certain probability, as long as its probability density tends to 0 as its distance from the center tends to infinity. In quantum space, the particle's position is updated with the Monte Carlo method according to formula (15), $X_{ij}(t+1)=p_{ij}(t)\pm\alpha\,|mbest_{j}(t)-X_{ij}(t)|\,\ln(1/u)$, where $u=\mathrm{rand}(0,1)$, the sign is chosen with equal probability, and $\alpha$ denotes the contraction-expansion coefficient. The coefficient $\alpha$ can take a fixed value or be reduced linearly (see [30]). In this paper, $\alpha$ is updated as $\alpha(t)=\alpha_{\max}-(\alpha_{\max}-\alpha_{\min})\,t/T_{\max}$, where $\alpha_{\max}$ and $\alpha_{\min}$ are the maximum and minimum contraction-expansion coefficients, respectively, and $T_{\max}$ and $t$ are the maximum and the current iteration numbers, respectively.
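The following Python sketch implements one position update of formula (15) together with the linear reduction of $\alpha$; the array shapes and the default coefficient range are illustrative assumptions.

```python
import numpy as np

def qpso_step(X, P, G, t, t_max, alpha_max=1.0, alpha_min=0.5):
    """One QPSO position update, a sketch of formula (15).
    X: current positions, shape (M, D); P: personal best positions; G: global best position."""
    M, D = X.shape
    alpha = alpha_max - (alpha_max - alpha_min) * t / t_max   # linear reduction of alpha
    mbest = P.mean(axis=0)                                    # mean best position (formula (14))
    phi = np.random.rand(M, D)
    p = phi * P + (1.0 - phi) * G                             # local attractor of each particle
    u = np.random.rand(M, D)
    sign = np.where(np.random.rand(M, D) < 0.5, -1.0, 1.0)    # +/- chosen with equal probability
    return p + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u)
```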

4.2. The Design of the CIQPSO Algorithm

The CIQPSO algorithm is obtained by combining the QPSO algorithm with the immune PSO (IPSO) algorithm; that is, it introduces an immune memory function and an antibody density inhibition mechanism into the QPSO algorithm. The IPSO algorithm has the properties of antigen recognition, immune memory, and antibody density inhibition. Immune memory cells are obtained by preserving the best particle of each generation, and overly similar memory cells are eliminated. In the CIQPSO algorithm, all particles share information and coevolve. The objective function and constraints are regarded as antigens, and a candidate solution of the GNEP is regarded as an antibody (also called a particle). Each particle continuously adjusts its current position toward the global optimal solution by using its own best position information and the best position information of the whole swarm. The QPSO algorithm is prone to becoming trapped in local optima when the particles are excessively concentrated; moreover, particles of poor fitness but with a promising evolutionary trend should not be lost. Thus, we introduce the antibody probability density selection formula to maintain the diversity of the particles. The antibodies form a nonempty set, and the distance between antibodies is calculated by the following equation:

Following [31], the density of a particle can be defined as follows:

Therefore, we can obtain the probability density selection function from formula (19), in terms of the best position and the fitness function value of each particle. Adding new members mainly maintains the dynamical balance of the swarm and adjusts the swarm diversity. In particular, when the evolving swarm exhibits poor diversity and weak global search ability, randomly generated antibodies allow the swarm to shift toward regions containing better solutions.
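As an illustration, the following sketch computes an antibody density and a selection probability under the assumption that the density counts neighboring antibodies within a distance threshold and that selection favors antibodies of low density and good (small) fitness; the threshold eps and the weighting are illustrative choices rather than the paper's exact formulas.

```python
import numpy as np

def antibody_density(X, eps=1e-2):
    """Fraction of antibodies lying within Euclidean distance eps of each antibody
    (each antibody counts itself, so the density is always positive)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    return (d < eps).sum(axis=1) / X.shape[0]

def selection_probability(X, fitness_values, eps=1e-2):
    """Selection probability of each antibody: low density and small fitness
    (better adaptation) are favored, which helps preserve swarm diversity."""
    density = antibody_density(X, eps)
    score = 1.0 / (density * (1.0 + fitness_values))             # illustrative weighting
    return score / score.sum()
```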

4.3. Arithmetic Flow of the CIQPSO Algorithm

The implementation steps of the CIQPSO algorithm are as follows:

Step 1: parameter initialization. Set the maximum iteration number $T_{\max}$, the precision $\varepsilon$, the population size $M$, and the contraction-expansion coefficient $\alpha$; randomly generate the particles' initial positions.
Step 2: compute the fitness value of each particle and update each particle's personal best position and the swarm's global best position accordingly.
Step 3: calculate the average best position by formula (14).
Step 4: use formula (15) to update the positions of the particles, and place the global best particle of each iteration into the memory library as a memory cell.
Step 5: compute the fitness value of each updated particle and update the personal best positions and the global best position accordingly. If the stopping conditions are satisfied, go to Step 10; otherwise, go to Step 6.
Step 6: randomly generate new particles and their initial positions.
Step 7: choose particles from the newly generated particles by the probability density selection formula (19).
Step 8: select particles from the memory library to replace the particles of poor fitness in the population; the immune system thus generates a new generation for the next iteration.
Step 9: use formula (15) to calculate the new positions of the particles, and return to Step 2.
Step 10: stopping condition. If the maximum iteration number is reached or the required precision is met, stop the evolution; otherwise, return to Step 2.

The implementation process of the CIQPSO algorithm is shown in Figure 1, and a compact sketch of the main loop is given below.
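The following Python sketch ties the steps together; it assumes the illustrative helpers qpso_step and selection_probability defined above and a callable fitness_fn that maps a particle position to its fitness value. The memory handling and the proportion of injected antibodies are simplified assumptions, not the paper's exact settings.

```python
import numpy as np

def ciqpso(fitness_fn, dim, n_particles=50, t_max=1000, eps=1e-6, lower=-10.0, upper=10.0):
    """Simplified CIQPSO main loop (Steps 1-10)."""
    X = np.random.uniform(lower, upper, (n_particles, dim))   # Step 1: initial positions
    P = X.copy()                                               # personal best positions
    f_P = np.array([fitness_fn(x) for x in X])
    G = P[f_P.argmin()].copy()                                 # global best position
    memory = [G.copy()]                                        # immune memory library
    for t in range(t_max):
        X = qpso_step(X, P, G, t, t_max)                       # Steps 3-4: QPSO position update
        f_X = np.array([fitness_fn(x) for x in X])
        better = f_X < f_P                                     # Step 5: update personal/global bests
        P[better], f_P[better] = X[better], f_X[better]
        G = P[f_P.argmin()].copy()
        memory.append(G.copy())
        if f_P.min() < eps:                                    # Step 10: stopping test
            break
        # Steps 6-8: generate random antibodies, keep a few by density-based
        # selection, and replace the worst particles with them plus a memory cell.
        n_new = max(1, n_particles // 10)
        candidates = np.random.uniform(lower, upper, (n_particles, dim))
        cand_fit = np.array([fitness_fn(c) for c in candidates])
        probs = selection_probability(candidates, cand_fit)
        idx = np.random.choice(n_particles, size=n_new, replace=False, p=probs)
        recruits = np.vstack([candidates[idx], memory[-1][None, :]])
        worst = np.argsort(f_X)[-len(recruits):]
        X[worst] = recruits                                    # Step 9, then return to Step 2
    return G, f_P.min()
```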

4.4. The Performance Evaluation of the CIQPSO Algorithm

The CIQPSO algorithm is a new type of bio-inspired swarm intelligence algorithm that follows evolution rules similar to those of the genetic algorithm (GA), for instance, natural selection and survival of the fittest. Therefore, we use the quantitative analysis method of [32] to test the convergence of the algorithm by means of its off-line performance.

Definition 1. In the iteration process of the algorithm, the off-line performance at generation $T$ is defined as the cumulative average of the best fitness value of each generation up to $T$.
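In symbols, this measure can be written as

```latex
X^{*}(T) = \frac{1}{T} \sum_{t=1}^{T} f^{*}(t) ,
```

where $f^{*}(t)$ denotes the best fitness value of generation $t$.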

4.5. The Convergence of the CIQPSO Algorithm

Theorem 1 (see [26]). In $D$-dimensional space, the necessary and sufficient condition for particle $i$'s position $X_{i}(t)$ to converge to its attractor $p_{i}$ with probability 1 is that every one-dimensional coordinate $X_{ij}(t)$ converges to $p_{ij}$ with probability 1.
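In the notation of Section 4.1, this statement can be written as

```latex
\Pr\Bigl( \lim_{t \to \infty} X_{i}(t) = p_{i} \Bigr) = 1
\quad \Longleftrightarrow \quad
\Pr\Bigl( \lim_{t \to \infty} X_{ij}(t) = p_{ij} \Bigr) = 1
\quad \text{for every } j = 1, \ldots, D .
```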

Proof. Necessity: since , , we consider two cases.
(1) When , , the relation is true; then is established.
(2) When , so , the following inclusion relation is true: thereby, taking the limit on both sides of the above relation simultaneously, we obtain , which is .
Sufficiency: let ; then, for each , , we have . If , we obtain and . Therefore, taking the limit simultaneously on both sides of the above relation, we obtain , which is .
By Theorem 1, the convergence of a single particle's position under the update equation (15) can be reduced to the convergence of the particle in each one-dimensional coordinate. If the conditions of Theorem 1 are satisfied, then the CIQPSO algorithm converges. This also shows that the particle's position converges to the attractor with probability 1 if and only if each of its coordinates does.

5. Numerical Experiments

To test the performance of the CIQPSO algorithm, we set the parameters as follows: the population size is , the contraction-expansion coefficients are and , and the maximum number of iterations is . The fitness function precision of both Examples 1 and 2 is , and that of Example 3 is .

Example 1. Consider the GNEP from [33], where the players' decision variables are and and the payoff functions of the players are given below. The off-line performance and numerical results are shown in Table 1 and Figure 2, respectively.
From Table 1, the average number of iterations over five runs is 94, and we obtain the approximate GNEP solution of Example 1. Compared with the projection-like methods of [33], the CIQPSO algorithm not only decreases the number of iterations but also allows the initial point to be generated randomly; the calculation process does not depend on the selection of the initial points. In contrast, when the initial point in [33] is chosen as (10, 0), the Nash equilibrium point cannot be computed. Moreover, the average number of iterations in this paper is better than the 110 iterations reported in [22], which shows that the algorithm is more time-saving than the immune particle swarm optimization algorithm. In addition, Figure 2 shows that the CIQPSO algorithm converges faster than the IPSO algorithm [22], which means the CIQPSO algorithm has better convergence performance.

Example 2. Consider the GNEP from [26]; this generalized game has infinitely many solutions. The players' decision variables are and , and the payoff functions of the players are given below. The off-line performance and numerical results are shown in Table 2 and Figure 3, respectively.
From Table 2, the average number of iterations over five runs is 35, and we obtain the approximate GNEP solution of Example 2. In [26], Facchinei discussed Newton methods for normalized equilibria, but some GNEP solutions can be lost. In contrast, the CIQPSO algorithm can guarantee, through the probability density selection function, that some solutions of poor fitness are not lost. Furthermore, Figure 3 shows that the CIQPSO algorithm is faster than the IPSO algorithm and attains smaller fitness function values near the generalized Nash equilibrium point, which indicates that the convergence performance of the CIQPSO algorithm is effective.

Example 3. Consider the GNEP from [34], where the players' decision variables are and , and the payoff functions of the players are given below. We use the CIQPSO algorithm to solve this problem, and the numerical results are shown in Table 3.
From Table 3, the average number of iterations is 2, and we obtain the approximate GNEP solution of Example 3. Under the same precision, the proposed method finds an approximate Nash equilibrium in only 2 iterations, which is better than the quasivariational inequality penalty method in [34]: the number of iterations is smaller, the calculation is faster, and the algorithm does not rely on the selection of the initial point. Thus, this example further illustrates the effectiveness of using a swarm intelligence algorithm to solve the GNEP.

6. Conclusions

By transforming the GNEP and constructing an appropriate fitness function, we solve the GNEP approximately by using the CIQPSO algorithm, and some numerical examples illustrate that the algorithm is effective. The GNEP is transformed into a nonlinear equation problem in Section 3. The CIQPSO algorithm is then designed and applied to the GNEP. Moreover, using the probability density selection function and the quantum uncertainty principle, the convergence of the CIQPSO algorithm is demonstrated. Several numerical examples are given: Examples 1 and 2 show that the CIQPSO algorithm is better than the IPSO algorithm, with faster convergence and no reliance on the selection of the initial point; Example 2 shows that the CIQPSO algorithm is superior to the traditional Newton method for normalized equilibria and can guarantee, through the probability density selection function, that some solutions of poor fitness are not lost; Example 3 shows that the CIQPSO algorithm is superior to the quasivariational inequality penalty method, with faster and more time-saving calculations under the same precision. In summary, the CIQPSO algorithm outperforms the immune particle swarm algorithm, the Newton method for normalized equilibria, and the quasivariational inequality penalty method, with faster convergence and better off-line performance. More specifically, the algorithm does not require high smoothness of the fitness function or a careful choice of the initial point, and the numerical results demonstrate that the CIQPSO algorithm is effective for solving the GNEP. So far, little is known about swarm intelligence algorithms for solving the GNEP, but this topic is well worth further study.

Data Availability

All the data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 11561013) and Guizhou Science Foundation (Grant no. [2020]1Y284).