Abstract

An elite quantum behaved particle swarm optimization (EQPSO) algorithm is proposed, in which an elite strategy is exerted on the global best particle to prevent premature convergence of the swarm. The EQPSO algorithm is employed for solving the bilevel multiobjective programming problem (BLMPP) in this study, which has not been reported in the literature before. Finally, we use eight different test problems to evaluate the proposed algorithm, including low dimensional and high dimensional BLMPPs, as well as BLMPPs whose theoretical Pareto optimal fronts are not known. The experimental results show that the proposed algorithm is a feasible and efficient method for solving BLMPPs.

1. Introduction

Bilevel programming problems (BLPPs) arise in a wide variety of scientific and engineering applications, including optimal control, process optimization, game-playing strategy development, transportation problems, and so on. Thus, the BLPP has been studied by many scholars. Reviews, monographs, and surveys on the BLPP can be found in [1–11]. Moreover, evolutionary algorithms (EAs) have been employed to address the BLPP in [12–16].

Because multiobjective characteristics are widespread in BLPPs, the bilevel multiobjective programming problem (BLMPP) has attracted many researchers. For example, Shi and Xia [17, 18], Abo-Sinna and Baky [19], Nishizaki and Sakawa [20], and Zheng et al. [21] presented interactive algorithms for the BLMPP. Eichfelder [22] presented a method for solving nonlinear bilevel multiobjective optimization problems with coupled upper level constraints. Thereafter, Eichfelder [23] developed a numerical method for solving nonlinear nonconvex bilevel multiobjective optimization problems. In recent years, metaheuristics have attracted considerable attention as alternative methods for the BLMPP. For example, Deb and Sinha [24–26], as well as Sinha and Deb [27], discussed the BLMPP based on evolutionary multiobjective optimization principles. Based on those studies, Deb and Sinha [28] proposed a viable hybrid evolutionary-local-search based algorithm and presented challenging test problems. Sinha [29] presented a progressively interactive evolutionary multiobjective optimization method for the BLMPP.

Particle swarm optimization (PSO) is a relatively novel heuristic algorithm inspired by the choreography of a bird flock, and it has been found to be quite successful in a wide variety of optimization tasks [30]. Due to its high speed of convergence and relative simplicity, the PSO algorithm has been employed by many researchers for solving BLPPs. For example, Li et al. [31] proposed a hierarchical PSO for solving the BLPP. Kuo and Huang [32] applied the PSO algorithm to the bilevel linear programming problem. Gao et al. [33] presented a method to solve bilevel pricing problems in supply chains using PSO. However, it is worth noting that the papers mentioned above address only bilevel single objective problems, and the BLMPP has seldom been studied using PSO so far. There are probably two reasons for this situation. One reason is the added complexity associated with solving each level; the other is that the global convergence of PSO cannot be guaranteed [34].

In this paper, a method with guaranteed global convergence, called EQPSO, is proposed, in which an elite strategy is exerted on the global best particle to prevent premature convergence of the swarm. The EQPSO is employed for solving the BLMPP in this study, which has not been reported in the literature before. For such problems, the proposed algorithm directly simulates the decision process of bilevel programming, which is different from most traditional algorithms designed for specific versions or based on specific assumptions. The BLMPP is solved by addressing the upper level and the lower level multiobjective optimization problems interactively with the EQPSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using the elite strategy. This interactive procedure is repeated until the accurate Pareto optimal solutions of the original problem are found. The rest of the paper is organized as follows. In Section 2, the problem formulation is provided. The proposed algorithm for solving the bilevel multiobjective problem is presented in Section 3. In Section 4, some numerical examples are given to demonstrate the feasibility and efficiency of the proposed algorithm. Finally, conclusions are drawn in Section 5.

2. Problem Formulation

Let $x \in X \subseteq R^{n}$, $y \in Y \subseteq R^{m}$, $F: X \times Y \rightarrow R^{p}$, and $f: X \times Y \rightarrow R^{q}$. The general model of the BLMPP can be written as follows:

$$\begin{aligned} & \min_{x \in X} \; F(x, y) = \big(F_{1}(x, y), \ldots, F_{p}(x, y)\big) \\ & \text{s.t. } G(x, y) \leq 0, \\ & \qquad \min_{y \in Y} \; f(x, y) = \big(f_{1}(x, y), \ldots, f_{q}(x, y)\big) \\ & \qquad \text{s.t. } g(x, y) \leq 0, \end{aligned} \tag{2.1}$$

where $F(x, y)$ and $f(x, y)$ are the upper level and the lower level objective functions, respectively, and $G(x, y) \leq 0$ and $g(x, y) \leq 0$ denote the upper level and the lower level constraints, respectively.

Let $S = \{(x, y) : G(x, y) \leq 0,\ g(x, y) \leq 0\}$, $S(X) = \{x \in X : \exists\, y,\ (x, y) \in S\}$, and $S(x) = \{y \in Y : g(x, y) \leq 0\}$, and, for fixed $x \in S(X)$, let $P(x)$ denote the weak efficiency set of solutions to the lower level problem. The feasible solution set of problem (2.1) is denoted as $IR = \{(x, y) : (x, y) \in S,\ y \in P(x)\}$.

Definition 2.1. For a fixed $x$, if $y$ is a Pareto optimal solution to the lower level problem, then $(x, y)$ is a feasible solution to problem (2.1).

Definition 2.2. If $(x^{*}, y^{*})$ is a feasible solution to problem (2.1) and there is no feasible solution $(x, y)$ such that $F(x, y) \prec F(x^{*}, y^{*})$, then $(x^{*}, y^{*})$ is a Pareto optimal solution to problem (2.1), where “$\prec$” denotes Pareto preference.
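To make the Pareto preference in Definition 2.2 concrete, the following minimal C# sketch (an illustrative helper of ours, not part of the original formulation) tests whether one objective vector dominates another under minimization.

static class ParetoUtil
{
    // Returns true if objective vector a Pareto-dominates objective vector b
    // under minimization: a is no worse than b in every objective and strictly
    // better in at least one (cf. Definition 2.2).
    public static bool Dominates(double[] a, double[] b)
    {
        bool strictlyBetter = false;
        for (int i = 0; i < a.Length; i++)
        {
            if (a[i] > b[i]) return false;        // worse in some objective
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}

In these terms, a feasible solution of (2.1) is Pareto optimal exactly when no other feasible solution produces an upper level objective vector that dominates its own.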

For problem (2.1), it is noted that a solution $(x, y)$ is feasible for the upper level problem if and only if $y$ is a Pareto optimal solution of the lower level problem for the fixed $x$. In practice, the approximate Pareto optimal solutions of the lower level problem are often taken as the optimal response fed back to the upper level problem, and this point of view is usually accepted. Based on this fact, the EQPSO algorithm may have great potential for solving the BLMPP. On the other hand, unlike the traditional point-by-point approaches mentioned in Section 1, the EQPSO algorithm uses a group of points in its operation; thus the EQPSO can be developed as a new way of solving the BLMPP. The algorithm based on the EQPSO for solving (2.1) is presented next.

3. The Algorithm

3.1. The EQPSO

The quantum behaved particle swarm optimization (QPSO) algorithm is the integration of PSO and quantum computing theory developed in [35–38]. Compared with PSO, it needs no velocity vectors for the particles and has fewer parameters to adjust. Moreover, its global convergence can be guaranteed [39]. Due to its global convergence and relative simplicity, it has been found to be quite successful in a wide variety of optimization tasks. For example, a wide range of continuous optimization problems [40–45] have been solved by the QPSO, and the experimental results show that the QPSO works better than the standard PSO. Some improved QPSO algorithms can be found in [46–48]. In this paper, the EQPSO algorithm is proposed, in which an elite strategy is exerted on the global best particle to prevent premature convergence of the swarm; this gives the proposed algorithm good performance in solving high dimensional BLMPPs. The EQPSO follows the same design principle as the QPSO except for the global best particle selection criterion, so the global convergence proof of the EQPSO follows that in [39]. In the EQPSO, the particles move according to the following iterative equations:

$$X_{i,j}(t+1) = p_{i,j}(t) \pm \beta \,\big| mbest_{j}(t) - X_{i,j}(t) \big| \ln\!\left(\frac{1}{u}\right), \tag{3.1}$$

where

$$p_{i,j}(t) = \varphi\, P_{i,j}(t) + (1-\varphi)\, G_{j}(t), \qquad mbest(t) = \frac{1}{M} \sum_{i=1}^{M} P_{i}(t), \tag{3.2}$$

where $X_{i}$ denotes the $i$th particle's position and $mbest$ denotes the mean best position of all the particles' personal best positions. The $\varphi$, $u$, and $r$ are random numbers distributed uniformly on $(0, 1)$, respectively (the sign in (3.1) is taken as “$+$” if $r > 0.5$ and “$-$” otherwise). $\beta$ is the expansion-contraction coefficient; in general, $\beta = 0.5 + 0.5(T-t)/T$, where $t$ is the current iteration number and $T$ is the maximum number of iterations. $P_{i}$ and $G$ are the particle's personal best position and the global best position, respectively, where $G$ is chosen from the elite set introduced in the following parts (see Algorithm: Step 3).
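For illustration only, a minimal C# sketch of the position update (3.1)-(3.2) is given below; the class, method, and parameter names are our own assumptions, and the global best position is assumed to have been drawn at random from the elite set as described in Section 3.2.

using System;

static class EqpsoUpdate
{
    static readonly Random Rng = new Random();

    // One EQPSO position update for a single particle (Eqs. (3.1) and (3.2)).
    // x:     current position of the particle
    // pBest: the particle's personal best position
    // gBest: a global best position drawn at random from the elite set
    // mBest: mean of all particles' personal best positions
    // t, T:  current iteration number and maximum number of iterations
    public static double[] Update(double[] x, double[] pBest, double[] gBest,
                                  double[] mBest, int t, int T)
    {
        // Expansion-contraction coefficient, decreasing linearly from 1.0 to 0.5.
        double beta = 0.5 + 0.5 * (T - t) / (double)T;
        var next = new double[x.Length];
        for (int j = 0; j < x.Length; j++)
        {
            double phi = Rng.NextDouble();                       // phi ~ U(0,1)
            double p = phi * pBest[j] + (1.0 - phi) * gBest[j];  // local attractor, Eq. (3.2)
            double u = 1.0 - Rng.NextDouble();                   // u in (0,1], avoids log(1/0)
            double step = beta * Math.Abs(mBest[j] - x[j]) * Math.Log(1.0 / u);
            next[j] = Rng.NextDouble() > 0.5 ? p + step : p - step;  // Eq. (3.1)
        }
        return next;
    }
}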

3.2. The Algorithm for Solving BLMPP

The process of the proposed algorithm for solving the BLMPP is an interactive coevolutionary process. We first initialize the population and then solve the multiobjective optimization problems in the upper level and the lower level interactively using the EQPSO. In each iteration, a set of approximate Pareto optimal solutions for problem (2.1) is obtained by the elite strategy adopted in Deb et al. [49]. This interactive procedure is repeated until the accurate Pareto optimal solutions of problem (2.1) are found. The details of the proposed algorithm are given as follows.

Algorithm
Step  1. Initialization.
Step  1.1.  Initialize the population with particles organized into subswarms of equal size. The particles of a subswarm share the same upper level decision vector; each particle's position consists of the upper level and the lower level decision vectors and is sampled randomly in the feasible space.
Step  1.2.  Initialize the external loop counter.
Step  2.  For each subswarm, each particle is assigned a nondomination rank and a crowding value in the lower level objective space. Then, all resulting subswarms are combined into one population, which is named the parent population. Afterwards, each particle of the parent population is assigned a nondomination rank and a crowding value in the upper level objective space.
Step  3.  The nondominated particles, that is, those assigned the best nondomination rank in both the lower level and the upper level objective spaces, are saved from the parent population in the elite set.
Step  4.  For each subswarm, update the lower level decision variables.
Step  4.1.  Initialize the lower level loop counter.
Step  4.2.  Update each particle's position, with the upper level decision variables held fixed, according to (3.1) and (3.2).
Step  4.3.  Increment the lower level loop counter.
Step  4.4.  If the lower level loop counter reaches the maximum number of lower level iterations, go to Step 4.5; otherwise, go to Step 4.2.
Step  4.5.  Each particle of each subswarm is reassigned a nondomination rank and a crowding value in the lower level objective space. Then, all resulting subswarms are combined into one population, which is named the offspring population. Afterwards, each particle of the offspring population is reassigned a nondomination rank and a crowding value in the upper level objective space.
Step  5.  Combine the parent population and the offspring population to form a merged population. The merged population is reassigned nondomination ranks, and the particles within an identical nondomination rank are assigned a crowding distance value in the upper level objective space.
Step  6.  Choose half of the subswarms from the merged population, as follows. The particles of the best nondomination rank are considered first. Among them, the particles that also hold the best lower level rank are noted one by one in order of decreasing crowding distance, and for each such particle the corresponding subswarm from its source population (either the parent or the offspring population) is copied into an intermediate population. If a subswarm has already been copied into the intermediate population and a further particle from the same subswarm is found to qualify, the subswarm is not copied again. When all particles of the best rank have been considered, a similar consideration is continued with the next rank, and so on, until exactly the original number of subswarms has been copied into the intermediate population.
Step  7.  Update the elite set: the nondominated particles, that is, those assigned the best rank in both the lower level and the upper level objective spaces, are saved from the intermediate population in the elite set.
Step  8.  Update the upper level decision variables in the intermediate population.
Step  8.1.  Initialize the upper level loop counter.
Step  8.2.  Update each particle's position, with the lower level decision variables held fixed, according to (3.1) and (3.2).
Step  8.3.  Increment the upper level loop counter.
Step  8.4.  If the upper level loop counter reaches the maximum number of upper level iterations, go to Step 8.5; otherwise, go to Step 8.2.
Step  8.5.  Every member is then assigned a nondomination rank and a crowding distance value in the upper level objective space.
Step  9.  Increment the external loop counter.
Step  10.  If the external loop counter reaches the maximum number of external iterations, output the elite set; otherwise, go to Step 2.
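As an illustration of Steps 2, 3, and 7, the following C# sketch collects into the elite set the particles that are nondominated in both objective spaces; the Particle type, the per-subswarm treatment of the lower level comparison, and all names are our own assumptions about one reasonable realization, not a verbatim transcription of the original implementation.

using System.Collections.Generic;
using System.Linq;

class Particle
{
    public int SubswarmId;     // subswarm the particle belongs to (shared upper level vector)
    public double[] FUpper;    // upper level objective values F(x, y)
    public double[] FLower;    // lower level objective values f(x, y)
}

static class EliteStrategy
{
    // Pareto dominance under minimization (same test as sketched after Definition 2.2).
    static bool Dominates(double[] a, double[] b)
    {
        bool strict = false;
        for (int i = 0; i < a.Length; i++)
        {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strict = true;
        }
        return strict;
    }

    // Steps 3 and 7: particles that are nondominated in the upper level objective
    // space over the whole population AND nondominated in the lower level objective
    // space within their own subswarm are copied into the elite set.
    public static List<Particle> UpdateEliteSet(List<Particle> population)
    {
        return population.Where(p =>
            !population.Any(q => !ReferenceEquals(q, p) && Dominates(q.FUpper, p.FUpper)) &&
            !population.Any(q => !ReferenceEquals(q, p) && q.SubswarmId == p.SubswarmId
                                  && Dominates(q.FLower, p.FLower))).ToList();
    }
}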

In Steps 4 and 8, the global best position is chosen at random from the elite set. The criterion for the personal best position choice is as follows: if the current position is dominated by the previous position, then the previous position is kept; otherwise, the current position replaces the previous one; if neither of them is dominated by the other, then one of them is selected randomly. A relatively simple scheme is used to handle constraints. Whenever two individuals are compared, their constraints are checked. If both are feasible, nondomination sorting is applied directly to decide which one is selected. If one is feasible and the other is infeasible, the feasible one dominates. If both are infeasible, then the one with the lower amount of constraint violation dominates the other. The notations used in the proposed algorithm are detailed in Table 1.
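The comparison rules described above can be sketched in C# as follows; the method names and the scalar constraint-violation measure are illustrative assumptions, and the dominance test reuses the ParetoUtil helper sketched after Definition 2.2.

using System;

static class ComparisonRules
{
    static readonly Random Rng = new Random();

    // Pairwise comparison combining constraint handling with nondomination:
    // feasibility beats infeasibility, lower total violation beats higher,
    // and two feasible individuals are compared by Pareto dominance.
    // Returns -1 if A is preferred, +1 if B is preferred, 0 if neither dominates.
    public static int Compare(double[] objA, double violA, double[] objB, double violB)
    {
        bool feasA = violA <= 0.0, feasB = violB <= 0.0;
        if (feasA && !feasB) return -1;
        if (!feasA && feasB) return +1;
        if (!feasA && !feasB) return violA <= violB ? -1 : +1;
        if (ParetoUtil.Dominates(objA, objB)) return -1;
        if (ParetoUtil.Dominates(objB, objA)) return +1;
        return 0;
    }

    // Personal best update rule: keep the previous best if it is preferred,
    // replace it if the current position is preferred, and choose one of the
    // two at random when neither dominates the other.
    public static bool ReplacePersonalBest(double[] currObj, double currViol,
                                           double[] bestObj, double bestViol)
    {
        int c = Compare(currObj, currViol, bestObj, bestViol);
        if (c < 0) return true;    // current position becomes the new personal best
        if (c > 0) return false;   // previous personal best is kept
        return Rng.NextDouble() < 0.5;
    }
}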

4. Numerical Experiment

In this section, eight examples are considered to illustrate the feasibility of the proposed algorithm for problem (2.1). In order to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front, as well as the diversity of the obtained Pareto optimal solutions along the theoretical Pareto optimal front, we adopt the following evaluation metrics.

4.1. Performance Evaluation Metrics

(a) Generational Distance (GD): this metric, used by Deb [50], is employed in this paper as a way of evaluating the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front. The metric denotes the average distance between the two fronts:

$$GD = \frac{\sqrt{\sum_{i=1}^{n} d_{i}^{2}}}{n},$$

where $n$ is the number of the Pareto optimal solutions obtained by the proposed algorithm and $d_{i}$ is the Euclidean distance between the $i$th obtained Pareto optimal solution and the nearest member of the theoretical Pareto optimal set.
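A direct C# sketch of this metric is given below; it operates on arrays of objective vectors and assumes a discretized sample of the theoretical Pareto optimal front is available (all names are illustrative).

using System;
using System.Linq;

static class GdMetric
{
    // Generational Distance: GD = sqrt(sum_i d_i^2) / n, where d_i is the distance
    // from the i-th obtained solution to the nearest member of the theoretical front.
    public static double GenerationalDistance(double[][] obtainedFront, double[][] theoreticalFront)
    {
        double sumSquared = 0.0;
        foreach (var p in obtainedFront)
        {
            double nearest = theoreticalFront.Min(q => Euclidean(p, q));
            sumSquared += nearest * nearest;
        }
        return Math.Sqrt(sumSquared) / obtainedFront.Length;
    }

    // Euclidean distance between two objective vectors.
    internal static double Euclidean(double[] a, double[] b)
    {
        double s = 0.0;
        for (int i = 0; i < a.Length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.Sqrt(s);
    }
}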

(b) Spacing (SP): this metric is used to evaluate the diversity of the obtained Pareto optimal solutions by comparing the uniformity of their distribution and the deviation of the solutions, as described by Deb [50]:

$$SP = \frac{\sum_{m=1}^{M} d_{m}^{e} + \sum_{i=1}^{n-1} \left| d_{i} - \bar{d} \right|}{\sum_{m=1}^{M} d_{m}^{e} + (n-1)\,\bar{d}},$$

where $d_{i}$ is the Euclidean distance between consecutive solutions in the obtained Pareto optimal set, $\bar{d}$ is the mean of all $d_{i}$, $d_{m}^{e}$ is the Euclidean distance between the extreme solutions of the obtained Pareto optimal solution set and the theoretical Pareto optimal solution set on the $m$th objective, $M$ is the number of the upper level objective functions, and $n$ is the number of the solutions obtained by the proposed algorithm.
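Similarly, the spacing metric can be sketched as follows; for simplicity the consecutive distances are computed after sorting the obtained front along its first objective (an assumption suited to bi-objective fronts), the array dExt is assumed to hold the precomputed extreme-solution distances for each objective, and the Euclidean helper from the GD sketch is reused.

using System;
using System.Linq;

static class SpacingMetric
{
    // SP = (sum_m dExt[m] + sum_i |d_i - dBar|) / (sum_m dExt[m] + (n-1)*dBar),
    // where d_i are distances between consecutive obtained solutions (n >= 2 assumed)
    // and dBar is their mean.
    public static double Spacing(double[][] obtainedFront, double[] dExt)
    {
        // Sort along the first objective so that "consecutive" is well defined.
        var sorted = obtainedFront.OrderBy(p => p[0]).ToArray();
        var d = new double[sorted.Length - 1];
        for (int i = 0; i < d.Length; i++)
            d[i] = GdMetric.Euclidean(sorted[i], sorted[i + 1]);

        double dBar = d.Average();
        double extremes = dExt.Sum();
        double deviation = d.Sum(di => Math.Abs(di - dBar));
        return (extremes + deviation) / (extremes + d.Length * dBar);
    }
}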

All results presented in this paper have been obtained on a personal computer (CPU: AMD Phenom II X6 1055T, 2.80 GHz; RAM: 3.25 GB) using a C# implementation of the proposed algorithm.

4.2. Numerical Examples
4.2.1. Low Dimension BLMPPs

Example 4.1. Example 4.1 is taken from [22]. Here . In this example, the population size and iteration times are set as follows: , , , , and :
Figure 1 shows the Pareto front of this example obtained by the proposed algorithm. From Figure 1, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between the obtained and the theoretical Pareto optimal fronts is small (see the GD value in Table 2). Moreover, the low spacing value (see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Figure 2 shows the obtained solutions of this example, which follow the expected relationship between the decision variables. It is also obvious that all obtained solutions lie close to the upper level constraint boundary.

Example 4.2. Example 4.2 is taken from [51]. Here . In this example, the population size and iteration times are set as follows: , , , , and :
Figure 3 shows the Pareto optimal front of this example obtained by the proposed algorithm. From Figure 3, it is obvious that the obtained Pareto optimal front is very close to the theoretical Pareto optimal front; the average distance between the obtained and the theoretical Pareto optimal fronts is 0.00003 (see Table 2). On the other hand, the low spacing value (see Table 2) indicates that the obtained Pareto optimal solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front. Figure 4 shows the obtained Pareto optimal solutions; they follow the expected relationship between the decision variables.

4.2.2. High Dimension BLMPPs

Example 4.3. Example 4.3 is taken from [28]. Here . In this example, the population size and iteration times are set as follows: , , , , and :

Example 4.4. Example 4.4 is taken from [28]. Here . In this example, the population size and iteration times are set as follows: , , , and . where
This problem is more difficult than the previous problems (Examples 4.1 and 4.2) because its lower level problem has multimodalities, which makes it difficult to find the upper level Pareto optimal front. From Figure 5, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between the obtained and the theoretical Pareto optimal fronts is small (see the GD value in Table 2). Moreover, the low spacing value (see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, two obtained lower level Pareto optimal fronts are shown for two fixed upper level decision vectors.

Figure 6 shows the Pareto front of Example 4.4 obtained by the proposed algorithm. The upper level problem has multimodalities, which makes it difficult for an algorithm to find the upper level Pareto optimal front. From Figure 6, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between the obtained and the theoretical Pareto optimal fronts is small (see the GD value in Table 2). Moreover, the low spacing value (see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, all corresponding lower level Pareto optimal fronts are given.

Example 4.5. Example 4.5 is taken from [28]. Here . In this example, the population size and iteration times are set as follows: , , , and :

Example 4.6. Example 4.6 is taken from [28]. Here . In this example, the population size and iteration times are set as follows: , , , and :
Figure 7 shows the Pareto front of Example 4.5 obtained by the proposed algorithm. From Figure 7, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between the obtained and the theoretical Pareto optimal fronts is small (see the GD value in Table 2). Moreover, the low spacing value (see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, all obtained lower level Pareto optimal fronts are given. It is also obvious that the Pareto optimal fronts of both the lower and the upper level lie on constraint boundaries and that each lower level Pareto optimal front contributes unequally to the upper level Pareto optimal front.
Figure 8 shows the Pareto front of Example 4.6 obtained by the proposed algorithm. From Figure 8, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between the obtained and the theoretical Pareto optimal fronts is small (see the GD value in Table 2). Moreover, the low spacing value (see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, three obtained lower level Pareto optimal fronts are shown for three fixed upper level decision vectors. It can be seen that only one Pareto optimal point from each participating lower level problem qualifies to be on the upper level Pareto optimal front.

4.2.3. The BLMPPs with Unknown Theoretical Pareto Optimal Fronts

Example 4.7. Example 4.7 is taken from [52], in which the theoretical Pareto optimal front is not given. Here . In this example, the population size and iteration times are set as follows: , , , and :

Example 4.8. Example 4.8 is taken from [23]. Here . In this example, the population size and iteration times are set as follows: , , and :
Figure 9 shows the Pareto optimal front of Example 4.7 obtained by the proposed algorithm. Note that Zhang et al. [52] obtained only a single optimal solution using the weighted sum method. In contrast, a set of Pareto optimal solutions is obtained by the proposed algorithm. The fact that the single optimal solution reported in [52] is included in the obtained Pareto optimal solutions illustrates the feasibility of the proposed algorithm. Figure 10 shows the final archive solutions of Example 4.8 obtained by the proposed algorithm. For this problem, the exact Pareto optimal front is not known, but the Pareto optimal front obtained by the proposed algorithm is similar to that reported in the previous study [23].

5. Conclusion

In this paper, an EQPSO algorithm is presented, in which an elite strategy is exerted on the global best particle to prevent the swarm from clustering prematurely, enabling the particles to escape local optima. The EQPSO algorithm is employed for solving the bilevel multiobjective programming problem (BLMPP) for the first time. In this study, some numerical examples are used to explore the feasibility and efficiency of the proposed algorithm. The experimental results indicate that the Pareto fronts obtained by the proposed algorithm are very close to the theoretical Pareto optimal fronts and that the solutions are distributed uniformly over the entire range of the theoretical Pareto optimal fronts. The proposed algorithm is simple and easy to implement, which makes it an appealing method for further study of the BLMPP.

Acknowledgments

The authors are indebted to the referees and the associate editor for their insightful and pertinent comments. This work is supported by the National Science Foundation of China (71171150, 71171151, 50979073, 61273179, 11201039, 20101304), the Academic Award for Excellent Ph.D. Candidates Funded by Wuhan University and the Fundamental Research Fund for the Central Universities (no. 201120102020004), and the Ph.D. short-time mobility program by Wuhan University.