Journal of Applied Mathematics
Volume 2012, Article ID 626717, 13 pages
http://dx.doi.org/10.1155/2012/626717
Research Article

An Improved Particle Swarm Optimization for Solving Bilevel Multiobjective Programming Problem

1State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan 430072, China
2School of Information and Mathematics, Yangtze University, Jingzhou 434023, China
3College of Mathematics and Computer Sciences, Huanggang Normal University, Huanggang 438000, China

Received 4 December 2011; Revised 21 January 2012; Accepted 5 February 2012

Academic Editor: Debasish Roy

Copyright © 2012 Tao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An improved particle swarm optimization (PSO) algorithm is proposed for solving the bilevel multiobjective programming problem (BLMPP). For such problems, the proposed algorithm directly simulates the decision process of bilevel programming, unlike most traditional algorithms, which are designed for specific versions of the problem or rely on specific assumptions. The BLMPP is transformed into multiobjective optimization problems at the upper and lower levels, which are solved interactively by an improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using an elite strategy. This interactive procedure is repeated until the accurate Pareto optimal solutions of the original problem are found. Finally, some numerical examples are given to illustrate the feasibility of the proposed algorithm.

1. Introduction

The bilevel programming problem (BLPP) arises in a wide variety of scientific and engineering applications, including optimal control, process optimization, game-playing strategy development, and transportation problems. Thus, the BLPP has been studied by many scholars; reviews, monographs, and surveys on the BLPP can be found in [1–11]. Moreover, evolutionary algorithms (EAs) have been employed to address the BLPP in [12–16].

However, the bilevel multiobjective programming problem (BLMPP) has seldom been studied. Shi and Xia [17, 18], Abo-Sinna and Baky [19], Nishizaki and Sakawa [20], and Zheng et al. [21] presented interactive algorithms for the BLMPP. Eichfelder [22] presented a method for solving nonlinear bilevel multiobjective optimization problems with coupled upper level constraints. Thereafter, Eichfelder [23] developed a numerical method for solving nonlinear, nonconvex bilevel multiobjective optimization problems. In recent years, metaheuristics have attracted considerable attention as an alternative approach to the BLMPP. For example, Deb and Sinha [24–26] as well as Sinha and Deb [27] discussed the BLMPP based on evolutionary multiobjective optimization principles. Building on those studies, Deb and Sinha [28] proposed a viable hybrid evolutionary-local-search-based algorithm and presented challenging test problems. Sinha [29] presented a progressively interactive evolutionary multiobjective optimization method for the BLMPP.

Particle swarm optimization (PSO) is a relatively novel heuristic algorithm inspired by the choreography of a bird flock, and it has been found to be quite successful in a wide variety of optimization tasks [30]. Owing to its high speed of convergence and relative simplicity, the PSO algorithm has been employed by many researchers for solving BLPPs. For example, Li et al. [31] proposed a hierarchical PSO for solving the BLPP. Kuo and Huang [32] applied the PSO algorithm to the bilevel linear programming problem. Gao et al. [33] presented a PSO-based method for solving bilevel pricing problems in supply chains. However, it is worth noting that the papers mentioned above address only bilevel single-objective problems.

In this paper, an improved PSO is presented for solving the BLMPP. The algorithm can be outlined as follows. The BLMPP is transformed into multiobjective optimization problems at the upper and lower levels, which are solved interactively by an improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using an elite strategy. The above interactive procedure is repeated for a predefined number of iterations, after which the accurate Pareto optimal solutions of the BLMPP are obtained. The rest of the paper is organized as follows. In Section 2, the problem formulation is provided. The proposed algorithm for solving the bilevel multiobjective problem is presented in Section 3. In Section 4, some numerical examples are given to demonstrate the proposed algorithm, and conclusions are drawn in Section 5.

2. Problem Formulation

Let $x \in R^{n_1}$, $y \in R^{n_2}$, $F: R^{n_1} \times R^{n_2} \to R^{m_1}$, $f: R^{n_1} \times R^{n_2} \to R^{m_2}$, $G: R^{n_1} \times R^{n_2} \to R^{p}$, and $g: R^{n_1} \times R^{n_2} \to R^{q}$. The general model of the BLMPP can be written as follows:
$$
\begin{aligned}
&\min_{x}\ F(x,y)\\
&\ \text{s.t. } G(x,y) \le 0,\\
&\qquad \min_{y}\ f(x,y)\\
&\qquad\ \text{s.t. } g(x,y) \le 0,
\end{aligned}
\tag{2.1}
$$
where $F(x,y)$ and $f(x,y)$ are the upper level and the lower level objective functions, respectively, and $G(x,y)$ and $g(x,y)$ denote the upper level and the lower level constraints, respectively. Let $S = \{(x,y) : G(x,y) \le 0,\ g(x,y) \le 0\}$, $X = \{x : \exists y,\ G(x,y) \le 0,\ g(x,y) \le 0\}$, and $S(x) = \{y : g(x,y) \le 0\}$, and, for fixed $x \in X$, let $S^{*}(x)$ denote the set of weakly efficient solutions of the lower level problem. The feasible solution set of problem (2.1) is then denoted by $IR = \{(x,y) : (x,y) \in S,\ y \in S^{*}(x)\}$.

Definition 2.1. For a fixed $x \in X$, if $y$ is a Pareto optimal solution to the lower level problem, then $(x, y)$ is a feasible solution to problem (2.1).

Definition 2.2. If $(x^{*}, y^{*})$ is a feasible solution to problem (2.1) and there is no $(x, y) \in IR$ such that $F(x,y) \preceq F(x^{*}, y^{*})$, then $(x^{*}, y^{*})$ is a Pareto optimal solution to problem (2.1), where “$\preceq$” denotes Pareto preference.
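Operationally, “$F(x,y) \preceq F(x^{*}, y^{*})$” means that $F(x,y)$ is no worse in every upper level objective and strictly better in at least one. The following minimal Python sketch of such a dominance test is purely illustrative; the helper name `dominates` is ours, not from the paper:

```python
from typing import Sequence

def dominates(fa: Sequence[float], fb: Sequence[float]) -> bool:
    """Return True if objective vector fa Pareto-dominates fb under
    minimization: fa is no worse in every component and strictly
    better in at least one."""
    no_worse = all(a <= b for a, b in zip(fa, fb))
    strictly_better = any(a < b for a, b in zip(fa, fb))
    return no_worse and strictly_better

# (1.0, 2.0) dominates (1.0, 3.0) but does not dominate (0.5, 3.0).
assert dominates((1.0, 2.0), (1.0, 3.0))
assert not dominates((1.0, 2.0), (0.5, 3.0))
```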

For problem (2.1), it is noted that a solution $(x^{*}, y^{*})$ is feasible for the upper level problem if and only if $y^{*}$ is a Pareto optimal solution of the lower level problem for $x = x^{*}$. In practice, the approximate Pareto optimal solutions of the lower level problem are often taken as the optimal response fed back to the upper level problem, and this viewpoint is widely accepted. Based on this fact, the PSO algorithm has great potential for solving the BLMPP. On the other hand, unlike the traditional point-by-point approaches mentioned in Section 1, the PSO algorithm operates on a population of points; thus, PSO can be developed into a new way of solving the BLMPP. In the following, we present an improved PSO algorithm for solving problem (2.1).

3. The Algorithm

The proposed algorithm is an interactive coevolutionary process between the upper level and the lower level. We first initialize the population and then solve multiobjective optimization problems at the upper level and the lower level interactively using an improved PSO. Afterwards, a set of approximate Pareto optimal solutions for problem (2.1) is obtained by the elite strategy adopted in Deb et al. [34]. This interactive procedure is repeated until the accurate Pareto optimal solutions of problem (2.1) are found. The details of the proposed algorithm are given as follows.

3.1. Algorithm

Step 1. Initialize. Substep 1.1. Initialize the population $P_0$ with $N_u$ particles, composed of $n_s = N_u / N_l$ subswarms of size $N_l$ each. The position of the $j$th particle of the $k$th ($k = 1, 2, \ldots, n_s$) subswarm is denoted by $z_j = (x_j, y_j)$ ($j = 1, 2, \ldots, N_l$), and the corresponding velocity is denoted by $v_j = (v_{x_j}, v_{y_j})$ ($j = 1, 2, \ldots, N_l$); $z_j$ and $v_j$ are sampled randomly in the feasible space. Substep 1.2. Initialize the external loop counter $t = 0$.
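A minimal sketch of Substep 1.1 in Python is given below; the array layout and the helper name `init_swarm` are our own assumptions, not part of the paper's implementation:

```python
import numpy as np

def init_swarm(n_u, n_l, lb, ub, seed=0):
    """Initialize N_u particles as n_s = N_u // N_l subswarms of size N_l.
    lb, ub are box bounds on z = (x, y); returns positions and velocities
    as arrays of shape (n_s, N_l, dim)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    n_s, dim = n_u // n_l, len(lb)
    z = rng.uniform(lb, ub, size=(n_s, n_l, dim))               # positions z_j
    v = rng.uniform(-(ub - lb), ub - lb, size=(n_s, n_l, dim))  # velocities v_j
    return z, v

# Example 4.1 setting: N_u = 200, N_l = 40 gives 5 subswarms over (x, y1, y2).
z0, v0 = init_swarm(200, 40, lb=[0.0, -1.0, -1.0], ub=[1.0, 1.0, 1.0])
```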

Step 2. For the $k$th subswarm ($k = 1, 2, \ldots, n_s$), each particle is assigned a nondomination rank $ND_l$ and a crowding value $CD_l$ in $f$ space. Then, all resulting subswarms are combined into one population, denoted $P_t$. Afterwards, each particle is assigned a nondomination rank $ND_u$ and a crowding value $CD_u$ in $F$ space.

Step 3. The nondominated particles with both $ND_u = 1$ and $ND_l = 1$ in $P_t$ are saved in the elite set $A_t$.

Step 4. For the $k$th subswarm ($k = 1, 2, \ldots, n_s$), update the lower level decision variables. Substep 4.1. Initialize the lower level loop counter $t_l = 0$. Substep 4.2. Update the $j$th ($j = 1, 2, \ldots, N_l$) particle's position and velocity, with $x_j$ and $v_{x_j}$ held fixed, using
$$
v_{y_j}^{t_l + 1} = w_l v_{y_j}^{t_l} + c_{1l} r_{1l}\left(p_{\mathrm{best}\,y_j} - z_j^{t_l}\right) + c_{2l} r_{2l}\left(p_{g\mathrm{best}\,l} - z_j^{t_l}\right), \qquad z_j^{t_l + 1} = z_j^{t_l} + v_{y_j}^{t_l + 1}. \tag{3.1}
$$
Substep 4.3. Set $t_l = t_l + 1$. Substep 4.4. If $t_l \ge T_l$, go to Substep 4.5; otherwise, go to Substep 4.2. Substep 4.5. Each particle of the $k$th subswarm is reassigned a nondomination rank $ND_l$ and a crowding value $CD_l$ in $f$ space. Then, all resulting subswarms are combined into one population, denoted $Q_t$. Afterwards, each particle is reassigned a nondomination rank $ND_u$ and a crowding value $CD_u$ in $F$ space.
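A compact sketch of the update (3.1), which also covers (3.2) in Step 8 when the roles of $x$ and $y$ are exchanged, is given below. The vectorized layout and the `free` mask are our own assumptions for illustration, not the paper's C# implementation:

```python
import numpy as np

def pso_update(z, v, pbest, gbest, free, w=0.7298, c1=1.49618, c2=1.49618, rng=None):
    """One velocity/position update of the form (3.1)/(3.2).
    z, v, pbest : (N, dim) positions, velocities, and personal best positions.
    gbest       : (dim,) global best position drawn at random from the elite set A_t.
    free        : boolean mask over the dim coordinates that are updated
                  (the y components in Step 4, the x components in Step 8);
                  the remaining coordinates keep their position and velocity."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(z.shape)
    r2 = rng.random(z.shape)
    v_new = w * v + c1 * r1 * (pbest - z) + c2 * r2 * (gbest - z)
    v_new = np.where(free, v_new, v)      # freeze the fixed components' velocity
    z_new = np.where(free, z + v_new, z)  # ...and position
    return z_new, v_new
```

For Example 4.1, the lower level update would use `free = np.array([False, True, True])` so that $x$ stays fixed while $(y_1, y_2)$ move.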

Step 5. Combine populations $P_t$ and $Q_t$ to form $R_t$. The combined population $R_t$ is reassigned nondomination ranks $ND_u$, and the particles within an identical nondomination rank are assigned a crowding distance value $CD_u$ in $F$ space.

Step 6. Choose half of the particles from $R_t$. The particles of rank $ND_u = 1$ are considered first. Among the particles of rank $ND_u = 1$, those with $ND_l = 1$ are visited one by one in order of decreasing crowding distance $CD_u$, and for each such particle the corresponding subswarm from its source population (either $P_t$ or $Q_t$) is copied into an intermediate population $S_t$. If a subswarm has already been copied into $S_t$ and a further particle from the same subswarm is found to have $ND_u = ND_l = 1$, the subswarm is not copied again. When all particles with $ND_u = 1$ have been considered, the same procedure is continued with $ND_u = 2$ and so on, until exactly $n_s$ subswarms have been copied into $S_t$.

Step 7. Update the elite set $A_t$. The nondominated particles with both $ND_u = 1$ and $ND_l = 1$ in $S_t$ are saved in the elite set $A_t$.

Step 8. Update the upper level decision variables in $S_t$. Substep 8.1. Initialize the upper level loop counter $t_u = 0$. Substep 8.2. Update the $i$th ($i = 1, 2, \ldots, N_u$) particle's position and velocity, with $y_i$ and $v_{y_i}$ held fixed, using
$$
v_{x_i}^{t_u + 1} = w_u v_{x_i}^{t_u} + c_{1u} r_{1u}\left(p_{\mathrm{best}\,x_i} - z_i^{t_u}\right) + c_{2u} r_{2u}\left(p_{g\mathrm{best}\,u} - z_i^{t_u}\right), \qquad z_i^{t_u + 1} = z_i^{t_u} + v_{x_i}^{t_u + 1}. \tag{3.2}
$$
Substep 8.3. Set $t_u = t_u + 1$. Substep 8.4. If $t_u \ge T_u$, go to Substep 8.5; otherwise, go to Substep 8.2. Substep 8.5. Every member is then assigned a nondomination rank $ND_u$ and a crowding distance value $CD_u$ in $F$ space.

Step 9. Set $t = t + 1$.

Step 10. If $t \ge T$, output the elite set $A_t$. Otherwise, go to Step 2.
In Steps 4 and 8, the global best position is chosen at random from the elite set $A_t$. The criterion for choosing the personal best position is as follows: if the current position is dominated by the previous position, the previous position is kept; otherwise, the current position replaces the previous one; if neither dominates the other, one of them is selected at random. A relatively simple scheme is used to handle constraints. Whenever two individuals are compared, their constraints are checked first. If both are feasible, nondominated sorting is applied directly to decide which one is selected. If one is feasible and the other is infeasible, the feasible one dominates. If both are infeasible, the one with the smaller total constraint violation dominates the other. The notation used in the proposed algorithm is detailed in Table 1.
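This pairwise comparison is the familiar constraint-domination rule; a short illustrative sketch follows, assuming constraints are expressed as values required to be $\le 0$ and using helper names of our own choosing:

```python
def dominates(fa, fb):
    """Pareto dominance for minimization (see Definition 2.2)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def violation(g_values):
    """Total constraint violation, with constraints written as g_i(x, y) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def preferred(f_a, g_a, f_b, g_b):
    """True if candidate a wins the pairwise comparison of Section 3:
    feasible beats infeasible; among infeasible candidates, the smaller
    total violation wins; among feasible candidates, Pareto dominance decides."""
    va, vb = violation(g_a), violation(g_b)
    if va == 0.0 and vb == 0.0:
        return dominates(f_a, f_b)
    if (va == 0.0) != (vb == 0.0):
        return va == 0.0
    return va < vb
```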

Table 1: The notations of the algorithm.

4. Numerical Examples

In this section, three examples will be considered to illustrate the feasibility of the proposed algorithm for problem (2.1). In order to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front, as well as the diversity of the obtained Pareto optimal solutions along the theoretical Pareto optimal front, we adopted the following evaluation metrics.

4.1. Generational Distance (GD)

This metric, used by Deb [35], is employed in this paper as a way of evaluating the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front. The GD metric denotes the average distance between the obtained Pareto optimal front and the theoretical Pareto optimal front:
$$
\mathrm{GD} = \frac{\sqrt{\sum_{i=1}^{n} d_i^{2}}}{n}, \tag{4.1}
$$

where $n$ is the number of Pareto optimal solutions obtained by the proposed algorithm and $d_i$ is the Euclidean distance between each obtained Pareto optimal solution and the nearest member of the theoretical Pareto optimal set.
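Under the reconstruction of (4.1) above, GD can be computed directly from the two fronts; the array shapes below are assumptions for illustration:

```python
import numpy as np

def generational_distance(obtained, reference):
    """GD as in (4.1): obtained and reference are (n, M) and (r, M) arrays of
    upper level objective vectors; d_i is the Euclidean distance from each
    obtained point to its nearest reference point."""
    diff = obtained[:, None, :] - reference[None, :, :]   # (n, r, M)
    d = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)      # (n,) nearest distances
    return np.sqrt((d ** 2).sum()) / len(obtained)
```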

4.2. Spacing (SP)

This metric is used to evaluate the diversity of the obtained Pareto optimal solutions by comparing the uniformity of their distribution and the deviation of the solutions, as described by Deb [35]:
$$
\mathrm{SP} = \frac{\sum_{m=1}^{M} d_m^{e} + \sum_{i=1}^{n} \left| \overline{d} - d_i \right|}{\sum_{m=1}^{M} d_m^{e} + n \overline{d}}, \tag{4.2}
$$

where $d_i = \min_{j} \left( |F_1^{i}(x,y) - F_1^{j}(x,y)| + |F_2^{i}(x,y) - F_2^{j}(x,y)| \right)$, $i, j = 1, 2, \ldots, n$, $\overline{d}$ is the mean of all $d_i$, $d_m^{e}$ is the Euclidean distance between the extreme solutions of the obtained Pareto optimal solution set and of the theoretical Pareto optimal solution set with respect to the $m$th objective, $M$ is the number of upper level objective functions, and $n$ is the number of solutions obtained by the proposed algorithm.
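Likewise, (4.2) can be evaluated as sketched below for $M = 2$ upper level objectives; the reading of the "extreme solutions" as the points minimizing each objective in the two sets is our assumption:

```python
import numpy as np

def spacing(obtained, reference):
    """SP as in (4.2). obtained, reference: (n, 2) and (r, 2) arrays of
    upper level objective vectors."""
    n = len(obtained)
    # d_i: nearest-neighbour distance within the obtained front (L1 metric, i != j).
    l1 = np.abs(obtained[:, None, :] - obtained[None, :, :]).sum(axis=2)
    np.fill_diagonal(l1, np.inf)
    d = l1.min(axis=1)
    d_bar = d.mean()
    # d_m^e: Euclidean distance between the extreme solutions of the two fronts,
    # here taken as the point with the smallest value of objective m in each set.
    d_e = [np.linalg.norm(obtained[obtained[:, m].argmin()] -
                          reference[reference[:, m].argmin()])
           for m in range(obtained.shape[1])]
    return (sum(d_e) + np.abs(d_bar - d).sum()) / (sum(d_e) + n * d_bar)
```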

The PSO parameters are set as follows: $r_{1u}, r_{2u}, r_{1l}, r_{2l} \sim \mathrm{random}(0,1)$, the inertia weights $w_u = w_l = 0.7298$, and the acceleration coefficients $c_{1u} = c_{2u} = c_{1l} = c_{2l} = 1.49618$. All results presented in this paper have been obtained on a personal computer (CPU: AMD Phenom II X6 1055T, 2.80 GHz; RAM: 3.25 GB) using a C# implementation of the proposed algorithm, and the figures were produced with Origin 8.0.

Example 4.1. Example 4.1 is taken from [22]. Here $x \in R^{1}$ and $y \in R^{2}$. In this example, the population sizes and iteration counts are set as follows: $N_u = 200$, $T_u = 200$, $N_l = 40$, $T_l = 40$, and $T = 40$:
$$
\begin{aligned}
\min_{x}\ & F(x,y) = \left( y_1 - x,\ y_2 \right)\\
\text{s.t. } & G_1(y) = 1 + y_1 + y_2 \ge 0,\\
& \min_{y}\ f(x,y) = \left( y_1,\ y_2 \right)\\
& \quad \text{s.t. } g_1(x,y) = x^{2} - y_1^{2} - y_2^{2} \ge 0,\\
& \qquad\quad -1 \le y_1, y_2 \le 1, \quad 0 \le x \le 1.
\end{aligned}
\tag{4.3}
$$
Figure 1 shows the Pareto front of this example obtained by the proposed algorithm. From Figure 1, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front, and the average distance between the obtained Pareto optimal front and the theoretical Pareto optimal front is 0.00026, that is, GD = 0.00026 (see Table 2). Moreover, the low SP value (SP = 0.17569, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Figure 2 shows the obtained solutions of this example, which follow the relationship $y_1 = -1 - y_2$, $y_2 = -1/2 \pm (1/4)\sqrt{8x^{2} - 4}$, with $x \in (1/\sqrt{2}, 1)$. It is also evident that all obtained solutions lie close to the upper level constraint boundary ($1 + y_1 + y_2 = 0$).
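As an illustration of how such a test problem plugs into the comparison rules of Section 3, the functions of Example 4.1, as reconstructed above, might be coded as follows. This is an assumed encoding (every constraint rewritten in the $\le 0$ form), not the authors' implementation:

```python
import numpy as np

def upper_objectives(x, y):
    """F(x, y) = (y1 - x, y2), both minimized."""
    return np.array([y[0] - x, y[1]])

def lower_objectives(x, y):
    """f(x, y) = (y1, y2), both minimized."""
    return np.array([y[0], y[1]])

def constraints(x, y):
    """All constraints rewritten as values required to be <= 0:
    G1: 1 + y1 + y2 >= 0        ->  -(1 + y1 + y2) <= 0
    g1: x^2 - y1^2 - y2^2 >= 0  ->  y1^2 + y2^2 - x^2 <= 0
    plus the bounds -1 <= y1, y2 <= 1 and 0 <= x <= 1."""
    return np.array([
        -(1.0 + y[0] + y[1]),
        y[0] ** 2 + y[1] ** 2 - x ** 2,
        -1.0 - y[0], y[0] - 1.0,
        -1.0 - y[1], y[1] - 1.0,
        -x, x - 1.0,
    ])
```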

Table 2: Results of the Generational Distance (GD) and Spacing (SP) metrics for Examples 4.1 and 4.2.
Figure 1: The obtained Pareto optimal front of Example 4.1.
Figure 2: The obtained solutions of Example 4.1.

Example 4.2. Example 4.2 is taken from [36]. Here $x \in R^{1}$ and $y \in R^{2}$. In this example, the population sizes and iteration counts are set as follows: $N_u = 200$, $T_u = 50$, $N_l = 40$, $T_l = 20$, and $T = 40$:
$$
\begin{aligned}
\min_{x}\ & F(x,y) = \left( x^{2} + (y_1 - 1)^{2} + y_2^{2},\ (x - 1)^{2} + (y_1 - 1)^{2} + y_2^{2} \right),\\
& \min_{y}\ f(x,y) = \left( y_1^{2} + y_2^{2},\ (y_1 - x)^{2} + y_2^{2} \right),\\
& \quad -1 \le x, y_1, y_2 \le 2.
\end{aligned}
\tag{4.4}
$$
Figure 3 shows the Pareto optimal front of this example obtained by the proposed algorithm. From Figure 3, it is evident that the obtained Pareto optimal front is very close to the theoretical Pareto optimal front; the average distance between the obtained Pareto optimal front and the theoretical Pareto optimal front is 0.00004 (see Table 2). On the other hand, the obtained Pareto optimal solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front, as indicated by the low SP value (SP = 0.00173, see Table 2). Figure 4 shows the obtained Pareto optimal solutions; they follow the relationship $x = y_1$, $y_1 \in [0.5, 1]$, and $y_2 = 0$.

Figure 3: The obtained Pareto optimal front of Example 4.2.
Figure 4: The obtained solutions of Example 4.2.

Example 4.3. Example 4.3 is taken from [37], for which the theoretical Pareto optimal front is not given. Here $x \in R^{2}$ and $y \in R^{3}$. In this example, the population sizes and iteration counts are set as follows: $N_u = 100$, $T_u = 50$, $N_l = 20$, $T_l = 10$, and $T = 40$:
$$
\begin{aligned}
\max_{x}\ & F(x,y) = \left( x_1 + 9x_2 + 10y_1 + y_2 + 3y_3,\ 9x_1 + 2x_2 + 2y_1 + 7y_2 + 4y_3 \right)\\
\text{s.t. } & G_1(x,y) = 3x_1 + 9x_2 + 9y_1 + 5y_2 + 3y_3 \le 1039,\\
& G_2(x,y) = -4x_1 - x_2 + 3y_1 - 3y_2 + 2y_3 \le 94,\\
& \min_{y}\ f(x,y) = \left( 4x_1 + 6x_2 + 7y_1 + 4y_2 + 8y_3,\ 6x_1 + 4x_2 + 8y_1 + 7y_2 + 4y_3 \right)\\
& \quad \text{s.t. } g_1(x,y) = 3x_1 - 9x_2 - 9y_1 - 4y_2 \le 61,\\
& \qquad\quad g_2(x,y) = 5x_1 + 9x_2 + 10y_1 - y_2 - 2y_3 \le 924,\\
& \qquad\quad g_3(x,y) = 3x_1 - 3x_2 + y_2 + 5y_3 \le 420,\\
& \qquad\quad x_1, x_2, y_1, y_2, y_3 \ge 0.
\end{aligned}
\tag{4.5}
$$
Figure 5 shows the Pareto optimal front of Example 4.3 obtained by the proposed algorithm. Figure 6 shows all five constraints for the obtained Pareto optimal solutions, and it can be seen that $G_1$, $g_2$, and $g_3$ are active constraints. Note that Zhang et al. [37] obtained only a single optimal solution, $x^{*} = (146.2955, 28.9394)$ and $y^{*} = (0, 67.9318, 0)$, which corresponds to the maximum of $F_2$, using the weighted sum method. In contrast, a set of Pareto optimal solutions is obtained by the proposed algorithm. The fact that the single optimal solution of [37] is included in the obtained Pareto optimal solutions further illustrates the feasibility of the proposed algorithm.

Figure 5: The obtained Pareto optimal front of Example 4.3.
Figure 6: The constraints of Example 4.3.

5. Conclusion

In this paper, an improved PSO is presented for the BLMPP. The BLMPP is transformed into multiobjective optimization problems at the upper and lower levels, which are solved interactively by the proposed algorithm for a predefined number of iterations, and a set of accurate Pareto optimal solutions for the BLMPP is obtained by the elite strategy. The experimental results illustrate that the Pareto front obtained by the proposed algorithm is very close to the theoretical Pareto optimal front and that the solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front. Furthermore, the proposed algorithm is simple and easy to implement, and it provides another appealing method for further study of the BLMPP.

Acknowledgment

This work is supported by the National Science Foundation of China (71171151, 50979073).

References

1. B. Colson, P. Marcotte, and G. Savard, “An overview of bilevel optimization,” Annals of Operations Research, vol. 153, pp. 235–256, 2007.
2. L. N. Vicente and P. H. Calamai, “Bilevel and multilevel programming: a bibliography review,” Journal of Global Optimization, vol. 5, no. 3, pp. 291–306, 1994.
3. J. F. Bard, Practical Bilevel Optimization: Algorithms and Applications, vol. 30, Kluwer Academic, Dordrecht, The Netherlands, 1998.
4. S. Dempe, Foundations of Bilevel Programming, vol. 61 of Nonconvex Optimization and its Applications, Kluwer Academic, Dordrecht, The Netherlands, 2002.
5. S. Dempe, “Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints,” Optimization, vol. 52, no. 3, pp. 333–359, 2003.
6. Z.-Q. Luo, J.-S. Pang, and D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, Cambridge, UK, 1996.
7. K. Shimizu, Y. Ishizuka, and J. F. Bard, Nondifferentiable and Two-Level Mathematical Programming, Kluwer Academic, Dordrecht, The Netherlands, 1997.
8. B. Colson, P. Marcotte, and G. Savard, “Bilevel programming: a survey,” Quarterly Journal of Operations Research, vol. 3, no. 2, pp. 87–107, 2005.
9. G. M. Wang, Z. P. Wan, and X. J. Wang, “Bibliography on bilevel programming,” Advances in Mathematics, vol. 36, no. 5, pp. 513–529, 2007 (Chinese).
10. L. N. Vicente and P. H. Calamai, “Bilevel and multilevel programming: a bibliography review,” Journal of Global Optimization, vol. 5, no. 3, pp. 291–306, 1994.
11. U. P. Wen and S. T. Hsu, “Linear bi-level programming problems. A review,” Journal of the Operational Research Society, vol. 42, no. 2, pp. 125–133, 1991.
12. A. Koh, “Solving transportation bi-level programs with differential evolution,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 2243–2250, September 2007.
13. R. Mathieu, L. Pittard, and G. Anandalingam, “Genetic algorithm based approach to bilevel linear programming,” RAIRO Recherche Opérationnelle, vol. 28, no. 1, pp. 1–21, 1994.
14. V. Oduguwa and R. Roy, “Bi-level optimization using genetic algorithm,” in Proceedings of the IEEE International Conference on Artificial Intelligence Systems, pp. 322–327, 2002.
15. Y. Wang, Y. C. Jiao, and H. Li, “An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-handling scheme,” IEEE Transactions on Systems, Man and Cybernetics Part C, vol. 35, no. 2, pp. 221–232, 2005.
16. Y. Yin, “Genetic-algorithms-based approach for bilevel programming models,” Journal of Transportation Engineering, vol. 126, no. 2, pp. 115–119, 2000.
17. X. Shi and H. Xia, “Interactive bilevel multi-objective decision making,” Journal of the Operational Research Society, vol. 48, no. 9, pp. 943–949, 1997.
18. X. Shi and H. Xia, “Model and interactive algorithm of bi-level multi-objective decision-making with multiple interconnected decision makers,” Journal of Multi-Criteria Decision Analysis, vol. 10, pp. 27–34, 2001.
19. M. A. Abo-Sinna and I. A. Baky, “Interactive balance space approach for solving multi-level multi-objective programming problems,” Information Sciences, vol. 177, no. 16, pp. 3397–3410, 2007.
20. I. Nishizaki and M. Sakawa, “Stackelberg solutions to multiobjective two-level linear programming problems,” Journal of Optimization Theory and Applications, vol. 103, no. 1, pp. 161–182, 1999.
21. Y. Zheng, Z. Wan, and G. Wang, “A fuzzy interactive method for a class of bilevel multiobjective programming problem,” Expert Systems with Applications, vol. 38, pp. 10384–10388, 2011.
22. G. Eichfelder, “Solving nonlinear multi-objective bi-level optimization problems with coupled upper level constraints,” Technical Report 320, Institute of Applied Mathematics, University Erlangen-Nürnberg, Germany, 2007.
23. G. Eichfelder, “Multiobjective bilevel optimization,” Mathematical Programming, vol. 123, no. 2, pp. 419–449, 2010.
24. K. Deb and A. Sinha, “Constructing test problems for bilevel evolutionary multi-objective optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '09), pp. 1153–1160, May 2009.
25. K. Deb and A. Sinha, “Solving bilevel multiobjective optimization problems using evolutionary algorithms,” in Proceedings of the 5th International Conference on Evolutionary Multi-Criterion Optimization (EMO '09), vol. 5467 of Lecture Notes in Computer Science, pp. 110–124, 2009.
26. K. Deb and A. Sinha, “An evolutionary approach for bilevel multiobjective problems,” in Cutting-Edge Research Topics on Multiple Criteria Decision Making, vol. 35 of Communications in Computer and Information Science, pp. 17–24, Berlin, Germany, 2009.
27. A. Sinha and K. Deb, “Towards understanding evolutionary bilevel multiobjective optimization algorithm,” Technical Report 2008006, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur, India, 2008.
28. K. Deb and A. Sinha, “An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary local-search algorithm,” Technical Report 2009001, 2009.
29. A. Sinha, “Bilevel multi-objective optimization problem solving using progressively interactive EMO,” in Proceedings of the 6th International Conference on Evolutionary Multi-Criterion Optimization (EMO '11), vol. 6576, pp. 269–284, 2011.
30. J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
31. X. Li, P. Tian, and X. Min, “A hierarchical particle swarm optimization for solving bilevel programming problems,” in Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing (ICAISC '06), vol. 4029 of Lecture Notes in Computer Science, pp. 1169–1178, 2006.
32. R. J. Kuo and C. C. Huang, “Application of particle swarm optimization algorithm for solving bi-level linear programming problem,” Computers & Mathematics with Applications, vol. 58, no. 4, pp. 678–685, 2009.
33. Y. Gao, G. Zhang, J. Lu, and H. M. Wee, “Particle swarm optimization for bi-level pricing problems in supply chains,” Journal of Global Optimization, vol. 51, pp. 245–254, 2011.
34. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
35. K. Deb, “Multi-objective optimization using evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 6, pp. 182–197, 2002.
36. K. Deb and A. Sinha, “Solving bilevel multi-objective optimization problems using evolutionary algorithms,” KanGAL Report, 2008.
37. G. Zhang, J. Liu, and T. Dillon, “Decentralized multi-objective bilevel decision making with fuzzy demands,” Knowledge-Based Systems, vol. 20, pp. 495–507, 2007.