Research Article | Open Access

Tao Zhang, Tiesong Hu, Yue Zheng, Xuning Guo, "An Improved Particle Swarm Optimization for Solving Bilevel Multiobjective Programming Problem", Journal of Applied Mathematics, vol. 2012, Article ID 626717, 13 pages, 2012. https://doi.org/10.1155/2012/626717

An Improved Particle Swarm Optimization for Solving Bilevel Multiobjective Programming Problem

Academic Editor: Debasish Roy
Received: 04 Dec 2011
Revised: 21 Jan 2012
Accepted: 05 Feb 2012
Published: 10 Apr 2012

Abstract

An improved particle swarm optimization (PSO) algorithm is proposed for solving the bilevel multiobjective programming problem (BLMPP). Unlike most traditional algorithms, which are designed for specific problem versions or rest on specific assumptions, the proposed algorithm directly simulates the decision process of bilevel programming. The BLMPP is solved by addressing the upper level and the lower level multiobjective optimization problems interactively with an improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is maintained by an elite strategy. This interactive procedure is repeated until accurate Pareto optimal solutions of the original problem are found. Finally, numerical examples are given to illustrate the feasibility of the proposed algorithm.

1. Introduction

Bilevel programming problems (BLPPs) arise in a wide variety of scientific and engineering applications, including optimal control, process optimization, game-playing strategy development, and transportation problems. Thus, the BLPP has been studied by many scholars; reviews, monographs, and surveys on the BLPP can be found in [1–11]. Moreover, evolutionary algorithms (EAs) have been employed to address the BLPP in [12–16].

However, the bilevel multiobjective programming problem (BLMPP) has seldom been studied. Shi and Xia [17, 18], Abo-Sinna and Baky [19], Nishizaki and Sakawa [20], and Zheng et al. [21] presented interactive algorithms for the BLMPP. Eichfelder [22] presented a method for solving nonlinear bilevel multiobjective optimization problems with coupled upper level constraints and later [23] developed a numerical method for solving nonlinear, nonconvex bilevel multiobjective optimization problems. In recent years, metaheuristics have attracted considerable attention as an alternative approach to the BLMPP. For example, Deb and Sinha [24–26] as well as Sinha and Deb [27] discussed the BLMPP based on evolutionary multiobjective optimization principles. Building on those studies, Deb and Sinha [28] proposed a viable hybrid evolutionary-local-search-based algorithm and presented challenging test problems, and Sinha [29] presented a progressively interactive evolutionary multiobjective optimization method for the BLMPP.

Particle swarm optimization (PSO) is a relatively novel heuristic algorithm inspired by the choreography of a bird flock, which has been found to be quite successful in a wide variety of optimization tasks [30]. Due to its high speed of convergence and relative simplicity, the PSO algorithm has been employed by many researchers for solving BLPPs. For example, Li et al. [31] proposed a hierarchical PSO for solving BLPP. Kuo and Huang [32] applied the PSO algorithm for solving bilevel linear programming problem. Gao et al. [33] presented a method to solve bilevel pricing problems in supply chains using PSO. However, it is worth noting that the papers mentioned above are only for bilevel single objective problems.

In this paper, an improved PSO is presented for solving the BLMPP. The algorithm can be outlined as follows. The BLMPP is solved by addressing the upper level and the lower level multiobjective optimization problems interactively with an improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is maintained by the elite strategy. This interactive procedure is repeated for a predefined number of iterations, after which the accurate Pareto optimal solutions of the BLMPP are obtained. To this end, the rest of the paper is organized as follows. Section 2 provides the problem formulation. The proposed algorithm for solving the bilevel multiobjective problem is presented in Section 3. Section 4 gives some numerical examples to demonstrate the proposed algorithm, and Section 5 concludes.

2. Problem Formulation

Let x ∈ R^{n_1}, y ∈ R^{n_2}, F: R^{n_1} × R^{n_2} → R^{m_1}, f: R^{n_1} × R^{n_2} → R^{m_2}, G: R^{n_1} × R^{n_2} → R^p, and g: R^{n_1} × R^{n_2} → R^q. The general model of the BLMPP can be written as follows:

min_x F(x, y)
s.t. G(x, y) ≥ 0,
    min_y f(x, y)
    s.t. g(x, y) ≥ 0,    (2.1)

where F(x, y) and f(x, y) are the upper level and the lower level objective functions, respectively, and G(x, y) and g(x, y) denote the upper level and the lower level constraints, respectively. Let S = {(x, y) ∣ G(x, y) ≥ 0, g(x, y) ≥ 0}, X = {x ∣ ∃y: G(x, y) ≥ 0, g(x, y) ≥ 0}, and S(x) = {y ∣ g(x, y) ≥ 0}. For fixed x ∈ X, let S(X) denote the set of weakly efficient solutions of the lower level problem. The feasible solution set of problem (2.1) is then IR = {(x, y) ∣ (x, y) ∈ S, y ∈ S(X)}.

Definition 2.1. For fixed x ∈ X, if y is a Pareto optimal solution of the lower level problem, then (x, y) is a feasible solution of problem (2.1).

Definition 2.2. If (x*, y*) is a feasible solution of problem (2.1) and there is no (x, y) ∈ IR such that F(x, y) ≺ F(x*, y*), then (x*, y*) is a Pareto optimal solution of problem (2.1), where “≺” denotes the Pareto preference.
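The Pareto preference “≺” in Definition 2.2 can be checked componentwise. The following sketch (the function name is ours, not the paper's) tests whether one objective vector dominates another under minimization:

```python
def dominates(Fa, Fb):
    """Return True if Fa Pareto-dominates Fb (minimization): Fa is no worse
    in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(Fa, Fb))
    strictly_better = any(a < b for a, b in zip(Fa, Fb))
    return no_worse and strictly_better
```

For example, (1, 2) dominates (2, 3), while (1, 3) and (2, 2) are mutually nondominated.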

For problem (2.1), note that a solution (x*, y*) is feasible for the upper level problem if and only if y* is an optimal solution of the lower level problem with x = x*. In practice, the approximate Pareto optimal solutions of the lower level problem are often taken as the optimal response fed back to the upper level problem, and this viewpoint is commonly accepted. Based on this fact, the PSO algorithm may have great potential for solving the BLMPP. On the other hand, unlike the traditional point-by-point approaches mentioned in Section 1, the PSO algorithm operates on a population of points; thus, PSO can be developed into a new way of solving the BLMPP. In the following, we present an improved PSO algorithm for solving problem (2.1).

3. The Algorithm

The proposed algorithm is an interactive coevolutionary process between the upper level and the lower level. We first initialize the population and then solve the multiobjective optimization problems at the two levels interactively using an improved PSO. A set of approximate Pareto optimal solutions of problem (2.1) is then maintained by the elite strategy adopted from Deb et al. [34]. This interactive procedure is repeated until the accurate Pareto optimal solutions of problem (2.1) are found. The details of the proposed algorithm are as follows.

3.1. Algorithm

Step 1. Initialize.
Substep 1.1. Initialize the population P_0 with N_u particles, composed of n_s = N_u / N_l subswarms of size N_l each. The position of the jth particle of the kth (k = 1, 2, …, n_s) subswarm is denoted z_j = (x_j, y_j) (j = 1, 2, …, N_l), and the corresponding velocity is denoted v_j = (v_{x_j}, v_{y_j}) (j = 1, 2, …, N_l); z_j and v_j are sampled randomly in the feasible space.
Substep 1.2. Initialize the external loop counter t := 0.

Step 2. For the kth subswarm (k = 1, 2, …, n_s), each particle is assigned a nondomination rank ND_l and a crowding value CD_l in f space. Then all resulting subswarms are combined into one population, named P_t. Afterwards, each particle is assigned a nondomination rank ND_u and a crowding value CD_u in F space.

Step 3. The nondominated particles of P_t with both ND_u = 1 and ND_l = 1 are saved in the elite set A_t.

Step 4. For the kth subswarm (k = 1, 2, …, n_s), update the lower level decision variables.
Substep 4.1. Initialize the lower level loop counter t_l := 0.
Substep 4.2. Update the position and velocity of the jth (j = 1, 2, …, N_l) particle, with x_j and v_{x_j} held fixed, using

v_{y_j}^{t_l+1} = w_l v_{y_j}^{t_l} + c_{1l} r_{1l} (p_{pbest_{y_j}} − z_j^{t_l}) + c_{2l} r_{2l} (p_{gbest_l} − z_j^{t_l}),
z_j^{t_l+1} = z_j^{t_l} + v_{y_j}^{t_l+1}.    (3.1)

Substep 4.3. Set t_l := t_l + 1.
Substep 4.4. If t_l ≥ T_l, go to Substep 4.5; otherwise, go to Substep 4.2.
Substep 4.5. Each particle of the kth subswarm is reassigned a nondomination rank ND_l and a crowding value CD_l in f space. Then all resulting subswarms are combined into one population, renamed Q_t. Afterwards, each particle is reassigned a nondomination rank ND_u and a crowding value CD_u in F space.
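Update rule (3.1) can be sketched as follows. This is an illustrative implementation under our own naming, not the authors' code; only the y-components of a particle move, while its x-components stay fixed during the lower level loop:

```python
import random

def lower_level_update(y, v_y, p_pbest_y, p_gbest_l,
                       w_l=0.7298, c1_l=1.49618, c2_l=1.49618):
    """One application of (3.1) to the y-part of a particle: an inertia term,
    a cognitive pull toward the personal best, and a social pull toward the
    global best (all arguments are equal-length lists of floats)."""
    r1_l, r2_l = random.random(), random.random()
    new_v = [w_l * v + c1_l * r1_l * (pb - yk) + c2_l * r2_l * (gb - yk)
             for v, pb, gb, yk in zip(v_y, p_pbest_y, p_gbest_l, y)]
    new_y = [yk + v for yk, v in zip(y, new_v)]
    return new_y, new_v
```

The upper level update (3.2) is identical in form, acting on the x-components with w_u, c_{1u}, and c_{2u}.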

Step 5. Combine populations P_t and Q_t to form R_t. The combined population R_t is reassigned nondomination ranks ND_u, and the particles within an identical nondomination rank are assigned a crowding distance value CD_u in F space.

Step 6. Choose half of the particles from R_t. The particles of rank ND_u = 1 are considered first. Among them, the particles with ND_l = 1 are considered one by one in order of decreasing crowding distance CD_u; for each such particle, the corresponding subswarm from its source population (either P_t or Q_t) is copied into an intermediate population S_t. If a subswarm has already been copied into S_t and a later particle from the same subswarm is found to have ND_u = ND_l = 1, the subswarm is not copied again. When all particles with ND_u = 1 have been considered, the same procedure continues with ND_u = 2 and so on, until exactly n_s subswarms have been copied into S_t.

Step 7. Update the elite set A_t. The nondominated particles of S_t with both ND_u = 1 and ND_l = 1 are saved in the elite set A_t.

Step 8. Update the upper level decision variables in S_t.
Substep 8.1. Initialize the upper level loop counter t_u := 0.
Substep 8.2. Update the position and velocity of the ith (i = 1, 2, …, N_u) particle, with y_i and v_{y_i} held fixed, using

v_{x_i}^{t_u+1} = w_u v_{x_i}^{t_u} + c_{1u} r_{1u} (p_{pbest_{x_i}} − z_i^{t_u}) + c_{2u} r_{2u} (p_{gbest_u} − z_i^{t_u}),
z_i^{t_u+1} = z_i^{t_u} + v_{x_i}^{t_u+1}.    (3.2)

Substep 8.3. Set t_u := t_u + 1.
Substep 8.4. If t_u ≥ T_u, go to Substep 8.5; otherwise, go to Substep 8.2.
Substep 8.5. Each particle is then assigned a nondomination rank ND_u and a crowding distance value CD_u in F space.

Step 9. Set t := t + 1.

Step 10. If t ≥ T, output the elite set A_t; otherwise, go to Step 2.
In Steps 4 and 8, the global best position is chosen at random from the elite set A_t. The personal best position is chosen as follows: if the current position is dominated by the previous position, the previous position is kept; if the previous position is dominated by the current one, the current position replaces it; if neither dominates the other, one of the two is selected at random. A relatively simple scheme is used to handle constraints. Whenever two individuals are compared, their constraints are checked first. If both are feasible, nondomination sorting is applied directly to decide which one is selected. If one is feasible and the other is infeasible, the feasible one dominates. If both are infeasible, the one with the smaller total constraint violation dominates. The notation used in the proposed algorithm is detailed in Table 1.
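The pairwise comparison scheme just described (feasibility first, then nondomination, then total constraint violation) can be sketched as follows; the dictionary layout and the deterministic tie-break (keeping the first particle rather than choosing at random) are our simplifications:

```python
def compare(p, q):
    """Return the preferred of two particles. Each particle is a dict with
    'F' (tuple of objective values, minimized) and 'viol' (total constraint
    violation; 0.0 means feasible)."""
    if p['viol'] == 0 and q['viol'] == 0:
        # both feasible: decide by Pareto dominance on the objectives
        if p['F'] != q['F'] and all(a <= b for a, b in zip(p['F'], q['F'])):
            return p
        if p['F'] != q['F'] and all(b <= a for a, b in zip(p['F'], q['F'])):
            return q
        return p  # mutually nondominated: the paper picks one at random
    if p['viol'] == 0:
        return p  # feasible dominates infeasible
    if q['viol'] == 0:
        return q
    return p if p['viol'] < q['viol'] else q  # smaller violation wins
```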


Table 1

x_i: The ith particle's position for the upper level problem.
v_{x_i}: The velocity of x_i.
y_j: The jth particle's position for the lower level problem.
v_{y_j}: The velocity of y_j.
z_j: The jth particle's position for the BLMPP.
p_{pbest_{y_j}}: The jth particle's personal best position for the lower level problem.
p_{pbest_{x_i}}: The ith particle's personal best position for the upper level problem.
p_{gbest_l}: The global best position for the lower level problem.
p_{gbest_u}: The global best position for the upper level problem.
N_u: The population size of the upper level problem.
N_l: The subswarm size of the lower level problem.
t: Current iteration number for the overall problem.
T: The predefined maximum number of iterations for t.
t_u: Current iteration number for the upper level problem.
t_l: Current iteration number for the lower level problem.
T_u: The predefined maximum number of iterations for t_u.
T_l: The predefined maximum number of iterations for t_l.
w_u: Inertia weight for the upper level problem.
w_l: Inertia weight for the lower level problem.
c_{1u}: The cognitive learning rate for the upper level problem.
c_{2u}: The social learning rate for the upper level problem.
c_{1l}: The cognitive learning rate for the lower level problem.
c_{2l}: The social learning rate for the lower level problem.
ND_u: Nondomination rank for the upper level problem.
CD_u: Crowding distance value for the upper level problem.
ND_l: Nondomination rank for the lower level problem.
CD_l: Crowding distance value for the lower level problem.
P_t: The population at iteration t.
Q_t: The offspring of P_t.
S_t: Intermediate population.

4. Numerical Examples

In this section, three examples will be considered to illustrate the feasibility of the proposed algorithm for problem (2.1). In order to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front, as well as the diversity of the obtained Pareto optimal solutions along the theoretical Pareto optimal front, we adopted the following evaluation metrics.

4.1. Generational Distance (GD)

This metric, used by Deb [35], is employed in this paper to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front. The GD metric denotes the average distance between the obtained front and the theoretical front:

GD = √(∑_{i=1}^{n} d_i²) / n,    (4.1)

where 𝑛 is the number of the obtained Pareto optimal solutions by the proposed algorithm and 𝑑𝑖 is the Euclidean distance between each obtained Pareto optimal solution and the nearest member of the theoretical Pareto optimal set.
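With d_i the Euclidean distance to the nearest theoretical front point, GD as in (4.1) can be computed by the following sketch (names are ours):

```python
import math

def generational_distance(obtained, theoretical):
    """GD = sqrt(sum of d_i^2) / n, where d_i is the Euclidean distance from
    the i-th obtained point to its nearest point on the theoretical front."""
    n = len(obtained)
    total = sum(min(math.dist(p, q) for q in theoretical) ** 2
                for p in obtained)
    return math.sqrt(total) / n
```

A front that coincides with the theoretical front gives GD = 0.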

4.2. Spacing (SP)

This metric is used to evaluate the diversity of the obtained Pareto optimal solutions by comparing the uniformity of their distribution and the deviation of the solutions, as described by Deb [35]:

SP = (∑_{m=1}^{M} d_m^e + ∑_{i=1}^{n} |d̄ − d_i|) / (∑_{m=1}^{M} d_m^e + n·d̄),    (4.2)

where d_i = min_j (|F_1^i(x, y) − F_1^j(x, y)| + |F_2^i(x, y) − F_2^j(x, y)|), i, j = 1, 2, …, n; d̄ is the mean of all d_i; d_m^e is the Euclidean distance between the extreme solutions of the obtained Pareto optimal solution set and of the theoretical Pareto optimal solution set on the mth objective; M is the number of upper level objective functions; and n is the number of solutions obtained by the proposed algorithm.
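A sketch of metric (4.2), assuming the d_i defined above (nearest-neighbour distance as a sum of absolute objective differences) and taking the M extreme-solution distances d_m^e as precomputed inputs (names are ours):

```python
def spacing(front, d_extreme):
    """Metric (4.2): front is the list of obtained objective vectors;
    d_extreme holds the M distances d_m^e. d_i is the distance from point i
    to its nearest neighbour, summing absolute differences per objective."""
    n = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / n
    numerator = sum(d_extreme) + sum(abs(d_bar - di) for di in d)
    return numerator / (sum(d_extreme) + n * d_bar)
```

Perfectly evenly spaced solutions whose extremes coincide with the theoretical extremes give SP = 0.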

The PSO parameters are set as follows: r_{1u}, r_{2u}, r_{1l}, r_{2l} are sampled uniformly at random from (0, 1); the inertia weights are w_u = w_l = 0.7298; and the acceleration coefficients are c_{1u} = c_{2u} = c_{1l} = c_{2l} = 1.49618. All results presented in this paper were obtained on a personal computer (CPU: AMD Phenom II X6 1055T, 2.80 GHz; RAM: 3.25 GB) using a C# implementation of the proposed algorithm; the figures were produced with Origin 8.0.

Example 4.1. This example is taken from [22]. Here x ∈ R^1, y ∈ R^2. The population size and iteration counts are set as follows: N_u = 200, T_u = 200, N_l = 40, T_l = 40, and T = 40:

min_x F(x, y) = (y_1 − x, y_2)
s.t. G_1(y) = 1 + y_1 + y_2 ≥ 0,
    min_y f(x, y) = (y_1, y_2)
    s.t. g_1(x, y) = x² − y_1² − y_2² ≥ 0,
        −1 ≤ y_1, y_2 ≤ 1, 0 ≤ x ≤ 1.    (4.3)

Figure 1 shows the Pareto front obtained for this example by the proposed algorithm. From Figure 1, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front: the average distance between the two fronts is 0.00026, that is, GD = 0.00026 (see Table 2). Moreover, the low SP value (SP = 0.17569, see Table 2) shows that the proposed algorithm obtains a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Figure 2 shows the obtained solutions for this example, which follow the relationship y_1 = −1 − y_2, y_2 = −1/2 ± (1/4)√(8x² − 4), with x ∈ (1/√2, 1). It is also apparent that all obtained solutions lie on (or very near) the upper level constraint boundary G_1 = 0, that is, 1 + y_1 + y_2 = 0.
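The reported solution relationship can be verified numerically: substituting y_1 = −1 − y_2 and y_2 = −1/2 ± (1/4)√(8x² − 4) makes both G_1 and g_1 vanish, so the solutions lie exactly on the two constraint boundaries. A quick check of our own, using the problem data of (4.3):

```python
import math

def example41_solution(x, sign=1):
    """A point on the reported solution curve of Example 4.1:
    y2 = -1/2 + sign*(1/4)*sqrt(8x^2 - 4), y1 = -1 - y2, x in (1/sqrt(2), 1]."""
    y2 = -0.5 + sign * 0.25 * math.sqrt(8 * x * x - 4)
    y1 = -1.0 - y2
    return y1, y2

def constraints_41(x, y1, y2):
    """Constraint values of (4.3); both are required to be >= 0."""
    G1 = 1 + y1 + y2
    g1 = x * x - y1 * y1 - y2 * y2
    return G1, g1

x = 0.9
y1, y2 = example41_solution(x)
G1, g1 = constraints_41(x, y1, y2)  # both vanish up to rounding error
```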


Table 2

Example        GD        SP
Example 4.1    0.00026   0.17569
Example 4.2    0.00004   0.00173

Example 4.2. This example is taken from [36]. Here x ∈ R^1, y ∈ R^2. The population size and iteration counts are set as follows: N_u = 200, T_u = 50, N_l = 40, T_l = 20, and T = 40:

min_x F(x, y) = (x² + (y_1 − 1)² + y_2², (x − 1)² + (y_1 − 1)² + y_2²),
    min_y f(x, y) = (y_1² + y_2², (y_1 − x)² + y_2²),
    −1 ≤ x, y_1, y_2 ≤ 2.    (4.4)

Figure 3 shows the Pareto optimal front obtained for this example by the proposed algorithm. From Figure 3, it is clear that the obtained Pareto optimal front is very close to the theoretical Pareto optimal front: the average distance between the two fronts is 0.00004 (see Table 2). Moreover, the obtained Pareto optimal solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front, as indicated by the low SP value (SP = 0.00173, see Table 2). Figure 4 shows the obtained Pareto optimal solutions; they follow the relationship x = y_1, y_1 ∈ [0.5, 1], y_2 = 0.

Example 4.3. This example is taken from [37], in which the theoretical Pareto optimal front is not given. Here x ∈ R^2, y ∈ R^3. The population size and iteration counts are set as follows: N_u = 100, T_u = 50, N_l = 20, T_l = 10, and T = 40:

max_x F(x, y) = (x_1 + 9x_2 + 10y_1 + y_2 + 3y_3, 9x_1 + 2x_2 + 2y_1 + 7y_2 + 4y_3)
s.t. G_1(x, y) = 3x_1 + 9x_2 + 9y_1 + 5y_2 + 3y_3 ≤ 1039,
     G_2(x, y) = −4x_1 − x_2 + 3y_1 − 3y_2 + 2y_3 ≤ 94,
    min_y f(x, y) = (4x_1 + 6x_2 + 7y_1 + 4y_2 + 8y_3, 6x_1 + 4x_2 + 8y_1 + 7y_2 + 4y_3)
    s.t. g_1(x, y) = 3x_1 − 9x_2 − 9y_1 − 4y_2 ≤ 61,
         g_2(x, y) = 5x_1 + 9x_2 + 10y_1 − y_2 − 2y_3 ≤ 924,
         g_3(x, y) = 3x_1 − 3x_2 + y_2 + 5y_3 ≤ 420,
         x_1, x_2, y_1, y_2, y_3 ≥ 0.    (4.5)

Figure 5 shows the Pareto optimal front obtained for Example 4.3 by the proposed algorithm. Figure 6 shows all five constraints for all obtained Pareto optimal solutions; it can be seen that G_1, g_2, and g_3 are active constraints. Note that Zhang et al. [37] obtained only a single optimal solution, x = (146.2955, 28.9394) and y = (0, 67.9318, 0), which lies at the maximum of F_2, using the weighted sum method. In contrast, a set of Pareto optimal solutions is obtained by the proposed algorithm. The fact that the single optimal solution of [37] is included among the obtained Pareto optimal solutions further illustrates the feasibility of the proposed algorithm.
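The statement that G_1, g_2, and g_3 are active can be checked directly at the single solution reported in [37] (a verification we add here; it is not part of the original paper):

```python
def constraints_43(x1, x2, y1, y2, y3):
    """The five constraints of (4.5) as (left-hand side, bound) pairs,
    each of the form lhs <= bound."""
    return {
        'G1': (3*x1 + 9*x2 + 9*y1 + 5*y2 + 3*y3, 1039),
        'G2': (-4*x1 - x2 + 3*y1 - 3*y2 + 2*y3, 94),
        'g1': (3*x1 - 9*x2 - 9*y1 - 4*y2, 61),
        'g2': (5*x1 + 9*x2 + 10*y1 - y2 - 2*y3, 924),
        'g3': (3*x1 - 3*x2 + y2 + 5*y3, 420),
    }

# the single optimal solution of [37]: x = (146.2955, 28.9394), y = (0, 67.9318, 0)
vals = constraints_43(146.2955, 28.9394, 0.0, 67.9318, 0.0)
active = [name for name, (lhs, bound) in vals.items() if abs(lhs - bound) < 1e-2]
```

Evaluating the five left-hand sides at this point shows the solution feasible, with G_1, g_2, and g_3 at their bounds.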

5. Conclusion

In this paper, an improved PSO is presented for the BLMPP. The BLMPP is solved by addressing the upper level and the lower level multiobjective optimization problems interactively with the proposed algorithm for a predefined number of iterations, and a set of accurate Pareto optimal solutions for the BLMPP is obtained by the elite strategy. The experimental results illustrate that the Pareto front obtained by the proposed algorithm is very close to the theoretical Pareto optimal front, with solutions distributed uniformly over its entire range. Furthermore, the proposed algorithm is simple and easy to implement, and it provides another appealing method for further study of the BLMPP.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (71171151, 50979073).

References

  1. B. Colson, P. Marcotte, and G. Savard, “An overview of bilevel optimization,” Annals of Operations Research, vol. 153, pp. 235–256, 2007.
  2. L. N. Vicente and P. H. Calamai, “Bilevel and multilevel programming: a bibliography review,” Journal of Global Optimization, vol. 5, no. 3, pp. 291–306, 1994.
  3. J. F. Bard, Practical Bilevel Optimization: Algorithms and Applications, vol. 30, Kluwer Academic, Dordrecht, The Netherlands, 1998.
  4. S. Dempe, Foundations of Bilevel Programming, vol. 61 of Nonconvex Optimization and its Applications, Kluwer Academic, Dordrecht, The Netherlands, 2002.
  5. S. Dempe, “Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints,” Optimization, vol. 52, no. 3, pp. 333–359, 2003.
  6. Z.-Q. Luo, J.-S. Pang, and D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, Cambridge, UK, 1996.
  7. K. Shimizu, Y. Ishizuka, and J. F. Bard, Nondifferentiable and Two-Level Mathematical Programming, Kluwer Academic, Dordrecht, The Netherlands, 1997.
  8. B. Colson, P. Marcotte, and G. Savard, “Bilevel programming: a survey,” 4OR: A Quarterly Journal of Operations Research, vol. 3, no. 2, pp. 87–107, 2005.
  9. G. M. Wang, Z. P. Wan, and X. J. Wang, “Bibliography on bilevel programming,” Advances in Mathematics, vol. 36, no. 5, pp. 513–529, 2007 (Chinese).
  10. L. N. Vicente and P. H. Calamai, “Bilevel and multilevel programming: a bibliography review,” Journal of Global Optimization, vol. 5, no. 3, pp. 291–306, 1994.
  11. U. P. Wen and S. T. Hsu, “Linear bi-level programming problems. A review,” Journal of the Operational Research Society, vol. 42, no. 2, pp. 125–133, 1991.
  12. A. Koh, “Solving transportation bi-level programs with differential evolution,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 2243–2250, September 2007.
  13. R. Mathieu, L. Pittard, and G. Anandalingam, “Genetic algorithm based approach to bilevel linear programming,” RAIRO Recherche Opérationnelle, vol. 28, no. 1, pp. 1–21, 1994.
  14. V. Oduguwa and R. Roy, “Bi-level optimization using genetic algorithm,” in Proceedings of the IEEE International Conference on Artificial Intelligence Systems, pp. 322–327, 2002.
  15. Y. Wang, Y. C. Jiao, and H. Li, “An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-handling scheme,” IEEE Transactions on Systems, Man and Cybernetics Part C, vol. 35, no. 2, pp. 221–232, 2005.
  16. Y. Yin, “Genetic-algorithms-based approach for bilevel programming models,” Journal of Transportation Engineering, vol. 126, no. 2, pp. 115–119, 2000.
  17. X. Shi and H. Xia, “Interactive bilevel multi-objective decision making,” Journal of the Operational Research Society, vol. 48, no. 9, pp. 943–949, 1997.
  18. X. Shi and H. Xia, “Model and interactive algorithm of bi-level multi-objective decision-making with multiple interconnected decision makers,” Journal of Multi-Criteria Decision Analysis, vol. 10, pp. 27–34, 2001.
  19. M. A. Abo-Sinna and I. A. Baky, “Interactive balance space approach for solving multi-level multi-objective programming problems,” Information Sciences, vol. 177, no. 16, pp. 3397–3410, 2007.
  20. I. Nishizaki and M. Sakawa, “Stackelberg solutions to multiobjective two-level linear programming problems,” Journal of Optimization Theory and Applications, vol. 103, no. 1, pp. 161–182, 1999.
  21. Y. Zheng, Z. Wan, and G. Wang, “A fuzzy interactive method for a class of bilevel multiobjective programming problem,” Expert Systems with Applications, vol. 38, pp. 10384–10388, 2011.
  22. G. Eichfelder, “Solving nonlinear multi-objective bi-level optimization problems with coupled upper level constraints,” Technical Report 320, Institute of Applied Mathematics, University Erlangen-Nürnberg, Germany, 2007.
  23. G. Eichfelder, “Multiobjective bilevel optimization,” Mathematical Programming, vol. 123, no. 2, pp. 419–449, 2010.
  24. K. Deb and A. Sinha, “Constructing test problems for bilevel evolutionary multi-objective optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '09), pp. 1153–1160, May 2009.
  25. K. Deb and A. Sinha, “Solving bilevel multiobjective optimization problems using evolutionary algorithms,” in Proceedings of the 5th International Conference on Evolutionary Multi-Criterion Optimization (EMO '09), vol. 5467 of Lecture Notes in Computer Science, pp. 110–124, 2009.
  26. K. Deb and A. Sinha, “An evolutionary approach for bilevel multiobjective problems,” in Cutting-Edge Research Topics on Multiple Criteria Decision Making, vol. 35 of Communications in Computer and Information Science, pp. 17–24, Berlin, Germany, 2009.
  27. A. Sinha and K. Deb, “Towards understanding evolutionary bilevel multiobjective optimization algorithm,” Technical Report 2008006, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur, India, 2008.
  28. K. Deb and A. Sinha, “An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary local-search algorithm,” Technical Report 2009001, 2009.
  29. A. Sinha, “Bilevel multi-objective optimization problem solving using progressively interactive EMO,” in Proceedings of the 6th International Conference on Evolutionary Multi-Criterion Optimization (EMO '11), vol. 6576, pp. 269–284, 2011.
  30. J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
  31. X. Li, P. Tian, and X. Min, “A hierarchical particle swarm optimization for solving bilevel programming problems,” in Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing (ICAISC '06), vol. 4029 of Lecture Notes in Computer Science, pp. 1169–1178, 2006.
  32. R. J. Kuo and C. C. Huang, “Application of particle swarm optimization algorithm for solving bi-level linear programming problem,” Computers & Mathematics with Applications, vol. 58, no. 4, pp. 678–685, 2009.
  33. Y. Gao, G. Zhang, J. Lu, and H. M. Wee, “Particle swarm optimization for bi-level pricing problems in supply chains,” Journal of Global Optimization, vol. 51, pp. 245–254, 2011.
  34. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  35. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Chichester, UK, 2001.
  36. K. Deb and A. Sinha, “Solving bilevel multi-objective optimization problems using evolutionary algorithms,” KanGAL Report, 2008.
  37. G. Zhang, J. Liu, and T. Dillon, “Decentralized multi-objective bilevel decision making with fuzzy demands,” Knowledge-Based Systems, vol. 20, pp. 495–507, 2007.

Copyright © 2012 Tao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

