Abstract

An improved particle swarm optimization (PSO) algorithm is proposed for solving the bilevel multiobjective programming problem (BLMPP). For such problems, the proposed algorithm directly simulates the decision process of bilevel programming, which differs from most traditional algorithms designed for specific versions of the problem or based on specific assumptions. The BLMPP is transformed into interactively solving multiobjective optimization problems at the upper and the lower level by an improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using an elite strategy. This interactive procedure is repeated until accurate Pareto optimal solutions of the original problem are found. Finally, some numerical examples are given to illustrate the feasibility of the proposed algorithm.

1. Introduction

The bilevel programming problem (BLPP) arises in a wide variety of scientific and engineering applications, including optimal control, process optimization, game-playing strategy development, and transportation problems. Thus, the BLPP has been developed and researched by many scholars. Reviews, monographs, and surveys on the BLPP can be found in [1–11]. Moreover, evolutionary algorithms (EAs) have been employed to address the BLPP in [12–16].

However, the bilevel multiobjective programming problem (BLMPP) has seldom been studied. Shi and Xia [17, 18], Abo-Sinna and Baky [19], Nishizaki and Sakawa [20], and Zheng et al. [21] presented interactive algorithms for the BLMPP. Eichfelder [22] presented a method for solving nonlinear bilevel multiobjective optimization problems with coupled upper level constraints. Thereafter, Eichfelder [23] developed a numerical method for solving nonlinear nonconvex bilevel multiobjective optimization problems. In recent years, metaheuristics have attracted considerable attention as alternative methods for the BLMPP. For example, Deb and Sinha [24–26] as well as Sinha and Deb [27] discussed the BLMPP based on evolutionary multiobjective optimization principles. Building on those studies, Deb and Sinha [28] proposed a viable hybrid evolutionary-local-search-based algorithm and presented challenging test problems. Sinha [29] presented a progressively interactive evolutionary multiobjective optimization method for the BLMPP.

Particle swarm optimization (PSO) is a relatively recent heuristic algorithm inspired by the choreography of a bird flock, which has been found to be quite successful in a wide variety of optimization tasks [30]. Owing to its fast convergence and relative simplicity, the PSO algorithm has been employed by many researchers for solving BLPPs. For example, Li et al. [31] proposed a hierarchical PSO for solving the BLPP. Kuo and Huang [32] applied the PSO algorithm to the bilevel linear programming problem. Gao et al. [33] presented a method for solving bilevel pricing problems in supply chains using PSO. However, it is worth noting that the papers mentioned above address only bilevel single objective problems.

In this paper, an improved PSO is presented for solving the BLMPP. The algorithm can be outlined as follows. The BLMPP is transformed into interactively solving multiobjective optimization problems at the upper and the lower level by an improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using an elite strategy. This interactive procedure is repeated for a predefined number of iterations, after which accurate Pareto optimal solutions of the BLMPP are obtained. To this end, the rest of the paper is organized as follows. In Section 2, the problem formulation is provided. The proposed algorithm for solving the bilevel multiobjective problem is presented in Section 3. In Section 4, some numerical examples are given to demonstrate the proposed algorithm, and the conclusion is given in Section 5.

2. Problem Formulation

Let $x \in R^{n_1}$, $y \in R^{n_2}$, $F: R^{n_1} \times R^{n_2} \to R^{m_1}$, $f: R^{n_1} \times R^{n_2} \to R^{m_2}$, $G: R^{n_1} \times R^{n_2} \to R^{p}$, and $g: R^{n_1} \times R^{n_2} \to R^{q}$. The general model of the BLMPP can be written as follows:
$$
\begin{aligned}
\min_{x}\ & F(x,y)\\
\text{s.t. }\ & G(x,y) \le 0,\\
& \min_{y}\ f(x,y)\\
& \quad \text{s.t. }\ g(x,y) \le 0,
\end{aligned} \tag{2.1}
$$
where $F(x,y)$ and $f(x,y)$ are the upper level and the lower level objective functions, respectively, and $G(x,y)$ and $g(x,y)$ denote the upper level and the lower level constraints, respectively. Let $S = \{(x,y) : G(x,y) \le 0,\ g(x,y) \le 0\}$, $X = \{x : \exists y,\ G(x,y) \le 0,\ g(x,y) \le 0\}$, and $S(x) = \{y : g(x,y) \le 0\}$. For fixed $x \in X$, let $S(X)$ denote the set of weakly efficient solutions of the lower level problem; the feasible solution set of problem (2.1) is then denoted as $\mathrm{IR} = \{(x,y) : (x,y) \in S,\ y \in S(X)\}$.

Definition 2.1. For a fixed $x \in X$, if $y$ is a Pareto optimal solution to the lower level problem, then $(x, y)$ is a feasible solution to problem (2.1).

Definition 2.2. If $(x^*, y^*)$ is a feasible solution to problem (2.1) and there is no $(x, y) \in \mathrm{IR}$ such that $F(x, y) \prec F(x^*, y^*)$, then $(x^*, y^*)$ is a Pareto optimal solution to problem (2.1), where "$\prec$" denotes Pareto preference.
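For illustration, the Pareto preference relation used in Definition 2.2 can be checked componentwise. The following minimal sketch (in Python, assuming all objectives are minimized; the helper name `dominates` is introduced here only for illustration) is not part of the original formulation:

```python
import numpy as np

def dominates(Fa, Fb):
    """Return True if objective vector Fa Pareto-dominates Fb (minimization):
    Fa is no worse than Fb in every objective and strictly better in at least one."""
    Fa, Fb = np.asarray(Fa, dtype=float), np.asarray(Fb, dtype=float)
    return bool(np.all(Fa <= Fb) and np.any(Fa < Fb))
```

For instance, `dominates([1, 2], [2, 2])` returns `True`, while two mutually nondominated vectors return `False` in both directions.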

For problem (2.1), note that a solution $(x^*, y^*)$ is feasible for the upper level problem if and only if $y^*$ is an optimal solution of the lower level problem with $x = x^*$. In practice, the approximate Pareto optimal solutions of the lower level problem are often taken as the optimal response fed back to the upper level problem, and this point of view is generally accepted. Based on this fact, the PSO algorithm has great potential for solving the BLMPP. On the other hand, unlike the traditional point-by-point approaches mentioned in Section 1, the PSO algorithm operates on a group of points; thus, PSO can be developed as a new way of solving the BLMPP. In the following, we present an improved PSO algorithm for solving problem (2.1).

3. The Algorithm

The proposed algorithm is an interactive coevolutionary process between the upper level and the lower level. We first initialize the population and then solve multiobjective optimization problems at the upper level and the lower level interactively using an improved PSO. Afterwards, a set of approximate Pareto optimal solutions for problem (2.1) is obtained by the elite strategy adopted in Deb et al. [34]. This interactive procedure is repeated until accurate Pareto optimal solutions of problem (2.1) are found. The details of the proposed algorithm are given as follows.

3.1. Algorithm

Step 1. Initialize.
Substep 1.1. Initialize the population $P_0$ with $N_u$ particles, composed of $n_s = N_u / N_l$ subswarms of size $N_l$ each. The position of the $j$th particle of the $k$th subswarm ($k = 1, 2, \ldots, n_s$) is denoted by $z_j = (x_j, y_j)$ ($j = 1, 2, \ldots, N_l$), and the corresponding velocity is denoted by $v_j = (v_{x_j}, v_{y_j})$ ($j = 1, 2, \ldots, N_l$); $z_j$ and $v_j$ are sampled randomly in the feasible space.
Substep 1.2. Initialize the external loop counter $t = 0$.
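As an illustration of Substep 1.1, a minimal initialization sketch is given below. The box bounds `x_bounds` and `y_bounds`, the velocity range, and the dictionary layout are assumptions introduced for this illustration, not details specified by the algorithm:

```python
import numpy as np

def initialize_swarm(N_u, N_l, x_bounds, y_bounds, rng=np.random.default_rng()):
    """Sketch of Substep 1.1: N_u particles split into n_s = N_u // N_l subswarms
    of size N_l. Positions z = (x, y) and velocities v = (v_x, v_y) are sampled
    uniformly inside assumed box bounds (tuples of lower/upper bound arrays)."""
    n_s = N_u // N_l
    x_lo, x_hi = np.asarray(x_bounds[0], float), np.asarray(x_bounds[1], float)
    y_lo, y_hi = np.asarray(y_bounds[0], float), np.asarray(y_bounds[1], float)
    swarms = []
    for _ in range(n_s):
        x = rng.uniform(x_lo, x_hi, size=(N_l, x_lo.size))
        y = rng.uniform(y_lo, y_hi, size=(N_l, y_lo.size))
        v_x = rng.uniform(-(x_hi - x_lo), x_hi - x_lo, size=(N_l, x_lo.size))
        v_y = rng.uniform(-(y_hi - y_lo), y_hi - y_lo, size=(N_l, y_lo.size))
        swarms.append({"x": x, "y": y, "v_x": v_x, "v_y": v_y})
    return swarms
```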

Step 2. For the $k$th subswarm ($k = 1, 2, \ldots, n_s$), each particle is assigned a nondomination rank $\mathrm{ND}_l$ and a crowding value $\mathrm{CD}_l$ in $f$ space. Then, all subswarms are combined into one population, denoted $P_t$. Afterwards, each particle is assigned a nondomination rank $\mathrm{ND}_u$ and a crowding value $\mathrm{CD}_u$ in $F$ space.
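The nondomination ranks and crowding values used in Steps 2–8 follow the standard definitions of [34]. A simplified sketch is given below (a simple peeling scheme, minimization assumed); the function names are illustrative only:

```python
import numpy as np

def nondomination_ranks(F):
    """Assign nondomination ranks (1 = nondominated) to objective vectors F of
    shape (N, M) by repeatedly peeling off the current nondominated front."""
    F = np.asarray(F, float)
    ranks = np.zeros(F.shape[0], dtype=int)
    remaining = np.arange(F.shape[0])
    r = 1
    while remaining.size:
        sub = F[remaining]
        nondom = [i for i, fi in enumerate(sub)
                  if not any(np.all(fj <= fi) and np.any(fj < fi) for fj in sub)]
        ranks[remaining[nondom]] = r
        remaining = np.delete(remaining, nondom)
        r += 1
    return ranks

def crowding_distance(F):
    """Crowding distance of each point within one front (F has shape (N, M))."""
    F = np.asarray(F, float)
    N, M = F.shape
    cd = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        span = (F[order[-1], m] - F[order[0], m]) or 1.0  # guard against zero range
        cd[order[0]] = cd[order[-1]] = np.inf              # boundary points kept
        for k in range(1, N - 1):
            cd[order[k]] += (F[order[k + 1], m] - F[order[k - 1], m]) / span
    return cd
```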

Step 3. The nondominated particles of $P_t$ with both $\mathrm{ND}_u = 1$ and $\mathrm{ND}_l = 1$ are saved in the elite set $A_t$.

Step 4. For the $k$th subswarm ($k = 1, 2, \ldots, n_s$), update the lower level decision variables.
Substep 4.1. Initialize the lower level loop counter $t_l = 0$.
Substep 4.2. Update the position and velocity of the $j$th particle ($j = 1, 2, \ldots, N_l$), with $x_j$ and $v_{x_j}$ held fixed, using
$$v_{y_j}^{t_l+1} = w_l v_{y_j}^{t_l} + c_{1l} r_{1l}\left(p_{\mathrm{pbest}_{y_j}} - z_j^{t_l}\right) + c_{2l} r_{2l}\left(p_{\mathrm{gbest}_l} - z_j^{t_l}\right), \qquad z_j^{t_l+1} = z_j^{t_l} + v_{y_j}^{t_l+1}. \tag{3.1}$$
Substep 4.3. Set $t_l = t_l + 1$.
Substep 4.4. If $t_l \ge T_l$, go to Substep 4.5; otherwise, go to Substep 4.2.
Substep 4.5. Each particle of the $k$th subswarm is reassigned a nondomination rank $\mathrm{ND}_l$ and a crowding value $\mathrm{CD}_l$ in $f$ space. Then, all resulting subswarms are combined into one population, denoted $Q_t$. Afterwards, each particle is reassigned a nondomination rank $\mathrm{ND}_u$ and a crowding value $\mathrm{CD}_u$ in $F$ space.
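A sketch of the lower level update (3.1) is given below. It assumes that only the $y$-components of the position and velocity are updated while $x_j$ and $v_{x_j}$ stay fixed, and it uses the parameter values reported in Section 4; the array-based interface is an assumption of this illustration:

```python
import numpy as np

def lower_level_update(y, v_y, p_best_y, g_best_y, w_l=0.7298,
                       c1_l=1.49618, c2_l=1.49618, rng=np.random.default_rng()):
    """One application of (3.1) to a subswarm: y and v_y have shape (N_l, n2);
    p_best_y / g_best_y are the y-parts of the personal and global best positions.
    The x-components of each particle are left unchanged by this step."""
    r1 = rng.uniform(0.0, 1.0, size=y.shape)
    r2 = rng.uniform(0.0, 1.0, size=y.shape)
    v_y = w_l * v_y + c1_l * r1 * (p_best_y - y) + c2_l * r2 * (g_best_y - y)
    return y + v_y, v_y
```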

Step 5. Combine populations $P_t$ and $Q_t$ to form $R_t$. The combined population $R_t$ is reassigned nondomination ranks $\mathrm{ND}_u$, and the particles within an identical nondomination rank are assigned a crowding distance value $\mathrm{CD}_u$ in $F$ space.

Step 6. Choose half of the particles from $R_t$. The particles of rank $\mathrm{ND}_u = 1$ are considered first. Among the particles of rank $\mathrm{ND}_u = 1$, those with $\mathrm{ND}_l = 1$ are considered one by one in order of decreasing crowding distance $\mathrm{CD}_u$, and for each such particle the corresponding subswarm from its source population (either $P_t$ or $Q_t$) is copied into an intermediate population $S_t$. If a subswarm has already been copied into $S_t$ and a subsequent particle from the same subswarm is found to have $\mathrm{ND}_u = \mathrm{ND}_l = 1$, the subswarm is not copied again. When all particles with $\mathrm{ND}_u = 1$ have been considered, the same procedure is continued with $\mathrm{ND}_u = 2$ and so on, until exactly $n_s$ subswarms have been copied into $S_t$, as sketched below.
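A possible reading of this selection step is sketched below; the per-particle dictionary layout and the field names are assumptions made for illustration:

```python
def select_subswarms(particles, n_s):
    """Sketch of Step 6. Each particle is assumed to be a dict with keys 'ND_u',
    'ND_l', 'CD_u', and 'subswarm' (an identifier of its source subswarm in P_t
    or Q_t). Subswarms are collected in the order induced by (ND_u ascending,
    CD_u descending), restricted to particles with ND_l == 1, until n_s distinct
    subswarms have been copied."""
    chosen, S_t = set(), []
    ordered = sorted(particles, key=lambda p: (p["ND_u"], -p["CD_u"]))
    for p in ordered:
        if p["ND_l"] != 1 or p["subswarm"] in chosen:
            continue
        chosen.add(p["subswarm"])
        S_t.append(p["subswarm"])
        if len(S_t) == n_s:
            break
    return S_t
```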

Step 7. Update the elite set $A_t$. The nondominated particles of $S_t$ with both $\mathrm{ND}_u = 1$ and $\mathrm{ND}_l = 1$ are saved in the elite set $A_t$.

Step 8. Update the upper level decision variables in $S_t$.
Substep 8.1. Initialize the upper level loop counter $t_u = 0$.
Substep 8.2. Update the position and velocity of the $i$th particle ($i = 1, 2, \ldots, N_u$), with $y_i$ and $v_{y_i}$ held fixed, using
$$v_{x_i}^{t_u+1} = w_u v_{x_i}^{t_u} + c_{1u} r_{1u}\left(p_{\mathrm{pbest}_{x_i}} - z_i^{t_u}\right) + c_{2u} r_{2u}\left(p_{\mathrm{gbest}_u} - z_i^{t_u}\right), \qquad z_i^{t_u+1} = z_i^{t_u} + v_{x_i}^{t_u+1}. \tag{3.2}$$
Substep 8.3. Set $t_u = t_u + 1$.
Substep 8.4. If $t_u \ge T_u$, go to Substep 8.5; otherwise, go to Substep 8.2.
Substep 8.5. Every member is then assigned a nondomination rank $\mathrm{ND}_u$ and a crowding distance value $\mathrm{CD}_u$ in $F$ space.
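The upper level update (3.2) mirrors (3.1) with the roles of $x$ and $y$ exchanged; a correspondingly symmetric sketch, under the same assumptions as before, is:

```python
import numpy as np

def upper_level_update(x, v_x, p_best_x, g_best_x, w_u=0.7298,
                       c1_u=1.49618, c2_u=1.49618, rng=np.random.default_rng()):
    """One application of (3.2): update the upper level components x and v_x
    while y and v_y stay fixed; structurally identical to the lower level sketch."""
    r1 = rng.uniform(0.0, 1.0, size=x.shape)
    r2 = rng.uniform(0.0, 1.0, size=x.shape)
    v_x = w_u * v_x + c1_u * r1 * (p_best_x - x) + c2_u * r2 * (g_best_x - x)
    return x + v_x, v_x
```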

Step 9. Set $t = t + 1$.

Step 10. If $t \ge T$, output the elite set $A_t$; otherwise, go to Step 2.
In Steps 4 and 8, the global best position is chosen at random from the elite set $A_t$. The criterion for choosing the personal best position is as follows: if the current position is dominated by the previous position, the previous position is kept; if the previous position is dominated by the current one, the current position replaces it; if neither dominates the other, one of them is selected at random. A relatively simple scheme is used to handle constraints. Whenever two individuals are compared, their constraints are checked. If both are feasible, nondomination sorting is applied directly to decide which one is selected. If one is feasible and the other is infeasible, the feasible one dominates. If both are infeasible, the one with the smaller amount of constraint violation dominates the other. The notation used in the proposed algorithm is detailed in Table 1.
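The pairwise comparison rule described above can be sketched as follows; the scalar violation measure (for example, the sum of positive constraint values) is an assumption of this illustration, as is returning `None` for mutually nondominated feasible particles:

```python
import numpy as np

def constrained_better(Fa, viol_a, Fb, viol_b):
    """Return True if particle a beats particle b, False if b beats a, and None
    if both are feasible and mutually nondominated (the text then keeps or
    selects one at random). viol_* is a scalar constraint-violation amount,
    zero when the particle is feasible."""
    Fa, Fb = np.asarray(Fa, float), np.asarray(Fb, float)
    feas_a, feas_b = viol_a <= 0.0, viol_b <= 0.0
    if feas_a and not feas_b:
        return True
    if feas_b and not feas_a:
        return False
    if not feas_a and not feas_b:
        return viol_a < viol_b          # smaller violation dominates
    if np.all(Fa <= Fb) and np.any(Fa < Fb):
        return True                     # a Pareto-dominates b
    if np.all(Fb <= Fa) and np.any(Fb < Fa):
        return False                    # b Pareto-dominates a
    return None
```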

4. Numerical Examples

In this section, three examples will be considered to illustrate the feasibility of the proposed algorithm for problem (2.1). In order to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front, as well as the diversity of the obtained Pareto optimal solutions along the theoretical Pareto optimal front, we adopted the following evaluation metrics.

4.1. Generational Distance (GD)

This metric, used by Deb [35], is employed in this paper to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front. The GD metric denotes the average distance between the obtained Pareto optimal front and the theoretical Pareto optimal front:
$$\mathrm{GD} = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n}, \tag{4.1}$$

where $n$ is the number of Pareto optimal solutions obtained by the proposed algorithm and $d_i$ is the Euclidean distance between each obtained Pareto optimal solution and the nearest member of the theoretical Pareto optimal set.
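A direct implementation of (4.1) might look as follows, assuming the theoretical front is available as a finite sample of points:

```python
import numpy as np

def generational_distance(obtained_F, true_F):
    """GD as in (4.1): for each obtained objective vector, take the Euclidean
    distance to its nearest neighbour on the sampled theoretical front, then
    divide the root of the sum of squares by the number of obtained points."""
    obtained_F = np.asarray(obtained_F, float)
    true_F = np.asarray(true_F, float)
    d = np.array([np.min(np.linalg.norm(true_F - p, axis=1)) for p in obtained_F])
    return np.sqrt(np.sum(d ** 2)) / len(obtained_F)
```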

4.2. Spacing (SP)

This metric is used to evaluate the diversity of the obtained Pareto optimal solutions by comparing the uniformity of their distribution and the deviation of the solutions, as described by Deb [35]:
$$\mathrm{SP} = \frac{\sum_{m=1}^{M} d_m^e + \sum_{i=1}^{n} \left| \bar{d} - d_i \right|}{\sum_{m=1}^{M} d_m^e + n \bar{d}}, \tag{4.2}$$

where $d_i = \min_{j \neq i} \left( |F_1^i(x,y) - F_1^j(x,y)| + |F_2^i(x,y) - F_2^j(x,y)| \right)$, $i, j = 1, 2, \ldots, n$, $\bar{d}$ is the mean of all the $d_i$, $d_m^e$ is the Euclidean distance between the extreme solutions of the obtained Pareto optimal solution set and of the theoretical Pareto optimal solution set on the $m$th objective, $M$ is the number of upper level objective functions, and $n$ is the number of solutions obtained by the proposed algorithm.
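Under the reconstruction of (4.2) given above, the metric can be sketched as follows; `extreme_dists` is assumed to hold the $M$ precomputed distances $d_m^e$:

```python
import numpy as np

def spacing_metric(obtained_F, extreme_dists):
    """Sketch of (4.2): d_i is the minimum Manhattan distance in upper level
    objective space from solution i to any other obtained solution, d_bar is
    its mean, and extreme_dists contains the M extreme-solution distances."""
    F = np.asarray(obtained_F, float)
    n = len(F)
    d = np.array([np.min([np.sum(np.abs(F[i] - F[j]))
                          for j in range(n) if j != i]) for i in range(n)])
    d_bar = d.mean()
    num = np.sum(extreme_dists) + np.sum(np.abs(d_bar - d))
    den = np.sum(extreme_dists) + n * d_bar
    return num / den
```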

The PSO parameters are set as follows: $r_{1u}, r_{2u}, r_{1l}, r_{2l} \sim \mathrm{random}(0,1)$, the inertia weights $w_u = w_l = 0.7298$, and the acceleration coefficients $c_{1u} = c_{2u} = c_{1l} = c_{2l} = 1.49618$. All results presented in this paper were obtained on a personal computer (CPU: AMD Phenom II X6 1055T, 2.80 GHz; RAM: 3.25 GB) using a C# implementation of the proposed algorithm, and the figures were produced using Origin 8.0.

Example 4.1. Example 4.1 is taken from [22]. Here $x \in R^1$ and $y \in R^2$. In this example, the population sizes and iteration counts are set as follows: $N_u = 200$, $T_u = 200$, $N_l = 40$, $T_l = 40$, and $T = 40$:
$$
\begin{aligned}
\min_{x}\ & F(x,y) = \left( y_1 - x,\ y_2 \right)\\
\text{s.t. }\ & G_1(y) = 1 + y_1 + y_2 \le 0,\\
& \min_{y}\ f(x,y) = \left( y_1,\ y_2 \right)\\
& \quad \text{s.t. }\ g_1(x,y) = x^2 - y_1^2 - y_2^2 \ge 0,\\
& \qquad -1 \le y_1, y_2 \le 1, \quad 0 \le x \le 1.
\end{aligned} \tag{4.3}
$$
Figure 1 shows the Pareto front obtained for this example by the proposed algorithm. From Figure 1, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between the two fronts is 0.00026, that is, GD = 0.00026 (see Table 2). Moreover, the low SP value (SP = 0.17569, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Figure 2 shows the obtained solutions of this example, which follow the relationship $y_1 = -1 - y_2$, $y_2 = -1/2 \pm (1/4)\sqrt{8x^2 - 4}$, with $x \in (1/\sqrt{2}, 1)$. It is also apparent that all obtained solutions lie close to the upper level constraint boundary $1 + y_1 + y_2 = 0$.
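For concreteness, the problem (4.3) as reconstructed above can be encoded as follows; the function name and return layout are illustrative only:

```python
import numpy as np

def example_4_1(x, y):
    """Objective and constraint values of Example 4.1 as written in (4.3):
    returns the upper level objectives F, the lower level objectives f, and
    the constraint values G1 (feasible when G1 <= 0) and g1 (feasible when g1 >= 0)."""
    F = np.array([y[0] - x, y[1]])
    f = np.array([y[0], y[1]])
    G1 = 1.0 + y[0] + y[1]
    g1 = x ** 2 - y[0] ** 2 - y[1] ** 2
    return F, f, G1, g1
```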

Example 4.2. Example 4.2 is taken from [36]. Here $x \in R^1$ and $y \in R^2$. In this example, the population sizes and iteration counts are set as follows: $N_u = 200$, $T_u = 50$, $N_l = 40$, $T_l = 20$, and $T = 40$:
$$
\begin{aligned}
\min_{x}\ & F(x,y) = \left( x^2 + (y_1 - 1)^2 + y_2^2,\ (x - 1)^2 + (y_1 - 1)^2 + y_2^2 \right)\\
& \min_{y}\ f(x,y) = \left( y_1^2 + y_2^2,\ (y_1 - x)^2 + y_2^2 \right)\\
& \quad -1 \le x, y_1, y_2 \le 2.
\end{aligned} \tag{4.4}
$$
Figure 3 shows the Pareto optimal front obtained for this example by the proposed algorithm. From Figure 3, it is evident that the obtained Pareto optimal front is very close to the theoretical Pareto optimal front; the average distance between the two fronts is 0.00004 (see Table 2). On the other hand, the obtained Pareto optimal solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front, as reflected by the low SP value (SP = 0.00173, see Table 2). Figure 4 shows the obtained Pareto optimal solutions; they follow the relationship $x = y_1$, $y_1 \in [0.5, 1]$, and $y_2 = 0$.

Example 4.3. Example 4.3 is taken from [37], in which the theoretical Pareto optimal front is not given. Here $x \in R^2$ and $y \in R^3$. In this example, the population sizes and iteration counts are set as follows: $N_u = 100$, $T_u = 50$, $N_l = 20$, $T_l = 10$, and $T = 40$:
$$
\begin{aligned}
\max_{x}\ & F(x,y) = \left( x_1 + 9x_2 + 10y_1 + y_2 + 3y_3,\ 9x_1 + 2x_2 + 2y_1 + 7y_2 + 4y_3 \right)\\
\text{s.t. }\ & G_1(x,y) = 3x_1 + 9x_2 + 9y_1 + 5y_2 + 3y_3 \le 1039,\\
& G_2(x,y) = -4x_1 - x_2 + 3y_1 - 3y_2 + 2y_3 \le 94,\\
& \min_{y}\ f(x,y) = \left( 4x_1 + 6x_2 + 7y_1 + 4y_2 + 8y_3,\ 6x_1 + 4x_2 + 8y_1 + 7y_2 + 4y_3 \right)\\
& \quad \text{s.t. }\ g_1(x,y) = 3x_1 - 9x_2 - 9y_1 - 4y_2 \le 61,\\
& \qquad g_2(x,y) = 5x_1 + 9x_2 + 10y_1 - y_2 - 2y_3 \le 924,\\
& \qquad g_3(x,y) = 3x_1 - 3x_2 + y_2 + 5y_3 \le 420,\\
& \qquad x_1, x_2, y_1, y_2, y_3 \ge 0.
\end{aligned} \tag{4.5}
$$
Figure 5 shows the Pareto optimal front obtained for Example 4.3 by the proposed algorithm. Figure 6 shows all five constraints for the obtained Pareto optimal solutions, from which it can be seen that $G_1$, $g_2$, and $g_3$ are active constraints. Note that Zhang et al. [37] obtained only a single optimal solution, $x = (146.2955, 28.9394)$ and $y = (0, 67.9318, 0)$, which lies at the maximum of $F_2$, using a weighted sum method. In contrast, a set of Pareto optimal solutions is obtained by the proposed algorithm. The fact that the single optimal solution of [37] is included in the obtained Pareto optimal solutions illustrates the feasibility of the proposed algorithm.

5. Conclusion

In this paper, an improved PSO is presented for the BLMPP. The BLMPP is transformed into interactively solving multiobjective optimization problems at the upper and the lower level using the proposed algorithm for a predefined number of iterations, and a set of accurate Pareto optimal solutions for the BLMPP is obtained by the elite strategy. The experimental results illustrate that the Pareto front obtained by the proposed algorithm is very close to the theoretical Pareto optimal front, and the solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front. Furthermore, the proposed algorithm is simple and easy to implement. It also provides an appealing approach for further study of the BLMPP.

Acknowledgment

This work is supported by the National Science Foundation of China (71171151, 50979073).