Abstract

Bilevel programming is characterized by two optimization problems located at different levels, in which the constraint region of the upper level problem is implicitly determined by the lower level problem. This paper focuses on a class of bilevel programming problems with a linear lower level problem and presents a new algorithm that combines an evolutionary algorithm with the duality principle. First, by using the primal-dual optimality conditions of the lower level problem, the original problem is transformed into a single-level nonlinear programming problem. Then, the feasible bases of the dual of the lower level problem are taken as the individuals in the population. For each individual, the values of the dual variables can be obtained from the dual problem, which simplifies the single-level problem. Finally, the simplified problem is solved, and its objective value is taken as the fitness of the individual. In addition, when nonconvex functions are involved in the upper level, a coevolutionary scheme is incorporated to obtain global optima. In the computational experiments, 10 problems, of smaller and larger scale, are solved, and the results show that the proposed algorithm is efficient and robust.

1. Introduction

In a hierarchical system involving two levels of decision making with different objectives, the decision makers are divided into two categories: the upper level (leader) and the lower level (follower). The upper level controls a subset of the decision variables, while the lower level controls the remaining decision variables. To each decision made by the upper level, the lower level responds with a decision that optimizes its objective function over a constraint set which depends upon the decision of the upper level. Bilevel programming (BLPP), as a hierarchical optimization problem, has been proposed for dealing with this kind of decision process and is characterized by the existence of two optimization problems, the upper level and the lower level problems. The distinguishing feature of this kind of problem is that the constraint region of the upper level problem is implicitly determined by the lower level problem, and any feasible solution must satisfy the optimality of the lower level problem as well as all constraints. The general bilevel programming problem can be formulated as follows:
$$
\begin{aligned}
\min_{x \in X}\ & F(x,y)\\
\text{s.t.}\ & G(x,y) \le 0,\\
& \min_{y \in Y}\ f(x,y)\\
& \quad\ \text{s.t.}\ g(x,y) \le 0,
\end{aligned}
\qquad (1.1)
$$
where $x$ and $y$ are called the upper level and the lower level variables, respectively. In a similar way, $F(x,y)$ ($f(x,y)$) is known as the upper (lower) level objective function, whereas the vector-valued functions $G(x,y)$ and $g(x,y)$ are usually referred to as the upper level and the lower level constraints, respectively. In addition, the sets $X$ and $Y$ place additional constraints on the variables, such as upper and lower bounds or integrality requirements. In problem (1.1), the outer and the inner minimization problems are called the upper level and the lower level problems, respectively.

In fact, it is very difficult to solve BLPPs. First, BLPPs are strongly NP-hard [1]. In addition, the lower level variables $y$ are always determined by the upper level variables $x$. Under some relevant assumptions, the relationship between $x$ and $y$ can be denoted by $y = y(x)$. Generally speaking, this solution function is nonconvex and even nondifferentiable, which makes BLPPs harder to handle.

BLPPs are applied extensively to a variety of real-world areas such as economics, engineering, and management [1, 2], and some efficient algorithms and theoretical results have been developed for BLPPs, linear or nonlinear [3–13]. For the linear bilevel programming problem, an optimal solution can be achieved at some extreme point of the constraint region. Based on this characteristic, some exact algorithmic approaches have been given, such as the kth-best method and branch-and-bound algorithms [1, 14–16]. For nonlinear bilevel programming, the literature is scarce [2, 8], and most procedures can only guarantee that local optima are obtained. In recent years, evolutionary algorithms (EAs) with global convergence have often been adopted to solve BLPPs. Kuo developed a particle swarm optimization (PSO) algorithm for linear BLPPs [17], in which only the upper level space is searched by the designed PSO algorithm. Wang et al. discussed nonlinear BLPPs with nonconvex objective functions and proposed a new evolutionary algorithm [18]. Deb presented a hybrid evolutionary-cum-local-search-based algorithm for general nonlinear BLPPs [19]. In Deb's method, both the upper and the lower level problems are solved by EAs. These proposed EAs share a common hierarchical structure, that is to say, EAs are used to explore the upper level space, whereas for each fixed value of the upper level variable, one has to solve the lower level problem. This means that these algorithms are time consuming when the upper level space becomes larger or the lower level problem is of larger scale. In order to overcome these shortcomings, some optimality results have been applied in the design of algorithmic approaches. Based on the extreme points of the constraint region, Calvete designed a genetic algorithm for linear BLPPs [20], but it is hard to extend the method to nonlinear cases except for some special cases involving concave functions. Besides, the K-K-T conditions have also been applied to convert BLPPs into single-level problems [8, 21].

In this paper, we discuss a class of BLPPs with a linear lower level problem and, based on the optimality results of linear programming, develop an efficient evolutionary algorithm which begins with the bases of the follower's dual problem. First, the lower level problem is replaced by its primal-dual optimality conditions. After doing so, the original problem is transformed into an equivalent single-level nonlinear programming problem. Then, in order to reduce the number of variables and improve the performance of the algorithm, we encode individuals as the bases of the dual problem. For each given individual, the dual variables can be computed, and some constraints can also be removed from the single-level problem. Finally, the simplified nonlinear program is solved, and its objective value is taken as the fitness of the individual. The proposed algorithm keeps the search space finite and can be used to deal with some nonlinear BLPPs.

This paper is organized as follows. Some notations and transformation are presented in Section 2, an evolutionary algorithm is given based on the duality principle in Section 3, and a revised algorithm is proposed for handling nonconvex cases in Section 4. Experimental results and comparison are presented in Section 5. We finally conclude our paper in Section 6.

2. Preliminaries

In this paper, the following nonlinear bilevel programming problem, with a linear lower level, is discussed:
$$
\begin{aligned}
\min_{x \in X}\ & F(x,y)\\
\text{s.t.}\ & G(x,y) \le 0,\\
& \min_{y}\ c^{T}x + d^{T}y\\
& \quad\ \text{s.t.}\ Ax + By \le b,\ y \ge 0,
\end{aligned}
\qquad (2.1)
$$
where $x \in R^{n}$, $y \in R^{m}$, $A \in R^{p \times n}$, $B \in R^{p \times m}$, $b \in R^{p}$, $c \in R^{n}$, $d \in R^{m}$, and $X$ is a box set.

Now we introduce some related definitions [1].
(1) Constraint region: $S = \{(x,y) : x \in X,\ G(x,y) \le 0,\ Ax + By \le b,\ y \ge 0\}$.
(2) For $x$ fixed, the feasible region of the lower level problem: $S(x) = \{y \ge 0 : By \le b - Ax\}$.
(3) Projection of $S$ onto the upper level decision space: $S(X) = \{x \in X : \exists\, y \text{ such that } (x,y) \in S\}$.
(4) The lower level rational reaction set for each $x \in S(X)$: $P(x) = \{y \in S(x) : d^{T}y \le d^{T}y' \text{ for all } y' \in S(x)\}$.
(5) Inducible region: $IR = \{(x,y) \in S : y \in P(x)\}$.

Hence, problem (2.1) can also be written as
$$
\min\{F(x,y) : (x,y) \in IR\}. \qquad (2.2)
$$

We always assume that $S$ is nonempty and compact to ensure that problem (2.1) is well posed.

Note that for $x$ fixed, the term $c^{T}x$ in the lower level objective is constant and can be removed when the lower level problem is solved. As a result, the lower level problem can be replaced by
$$
\min_{y}\ d^{T}y \quad \text{s.t.}\ By \le b - Ax,\ y \ge 0. \qquad (2.3)
$$

Problem (2.3) is a linear program parameterized by $x$, and its dual problem can be written as
$$
\max_{u}\ u^{T}(Ax - b) \quad \text{s.t.}\ B^{T}u \ge -d,\ u \ge 0. \qquad (2.4)
$$

According to the duality principle, for each fixed $x$, if there exists a pair $(y, u)$ satisfying
$$
By \le b - Ax,\ y \ge 0,\qquad B^{T}u \ge -d,\ u \ge 0,\qquad d^{T}y = u^{T}(Ax - b), \qquad (2.5)
$$
then $y$ is an optimal solution to (2.3). It follows that (2.1) can be converted into the following single-level problem:
$$
\begin{aligned}
\min_{x,y,u}\ & F(x,y)\\
\text{s.t.}\ & G(x,y) \le 0,\ x \in X,\\
& Ax + By \le b,\ y \ge 0,\\
& B^{T}u \ge -d,\ u \ge 0,\\
& d^{T}y = u^{T}(Ax - b).
\end{aligned}
\qquad (2.6)
$$
Obviously, (2.6) is a nonlinear programming problem.
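As a small numerical illustration of the primal-dual conditions (2.5), the following Python sketch solves (2.3) and its dual (2.4) for one fixed $x$ with scipy.optimize.linprog and checks that their optimal values coincide. It assumes the reconstructed problem form written above; the matrices and the helper name check_duality are illustrative, not taken from the paper.

```python
# A minimal check of the primal-dual conditions (2.5), under the reconstructed
# lower level form  (2.3)  min_y d^T y  s.t.  B y <= b - A x,  y >= 0,
# whose dual is     (2.4)  max_u u^T (A x - b)  s.t.  B^T u >= -d,  u >= 0.
import numpy as np
from scipy.optimize import linprog

def check_duality(A, B, b, d, x, tol=1e-8):
    rhs = b - A @ x                    # right-hand side of (2.3) for this fixed x
    primal = linprog(c=d, A_ub=B, b_ub=rhs, bounds=(0, None), method="highs")
    # Dual rewritten for linprog (which minimizes):  min (b - A x)^T u  s.t. -B^T u <= d, u >= 0.
    dual = linprog(c=rhs, A_ub=-B.T, b_ub=d, bounds=(0, None), method="highs")
    if not (primal.success and dual.success):
        return False, None, None
    # Strong duality: the primal minimum equals the dual maximum (= -dual.fun).
    return abs(primal.fun - (-dual.fun)) < tol, primal.x, dual.x

# Toy data, purely illustrative.
A = np.array([[1.0], [0.0]]); B = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0]);     d = np.array([-1.0, 1.0])
print(check_duality(A, B, b, d, x=np.array([1.0])))
```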

It is computationally expensive to solve problem (2.6) directly. In fact, $u$ is taken from the dual problem (2.4), so for each $x$ it should be a feasible solution of (2.4). Hence, if $u$ can be obtained in advance, then (2.6) can be simplified.

The algorithmic profile can be described as follows. We begin with (2.4) and encode each potential basis as an individual. For each individual, we solve the corresponding basic system to obtain a basic feasible solution $u$. Moreover, the constraints $B^{T}u \ge -d$, $u \ge 0$ can be removed, since these inequalities always hold for such a $u$. That is to say, once $u$ is obtained, (2.6) can be written as
$$
\begin{aligned}
\min_{x,y}\ & F(x,y)\\
\text{s.t.}\ & G(x,y) \le 0,\ x \in X,\\
& Ax + By \le b,\ y \ge 0,\\
& d^{T}y = u^{T}(Ax - b),
\end{aligned}
\qquad (2.7)
$$
which is a simplified version of (2.6).

For each $u$ obtained from (2.4), we solve the nonlinear programming problem (2.7) and, if it is solvable, obtain an optimal solution. This indicates that if one can search all bases of problem (2.4), then one can obtain the optimal solutions of BLPP (2.1) by taking the minimum among all objective values of (2.7). Although, once the dual variable vector $u$ is obtained, $y$ can also be recovered according to the duality theorems, we do not adopt this procedure to obtain $y$. The main reason is that when the follower has more than one optimal solution, the procedure cannot guarantee that the obtained $y$ is the best one for the leader.

3. Solution Method

In spite of the fact that (2.6) has been simplified to (2.7), it is still hard to solve when the upper level functions are nonconvex and even nondifferentiable. In this section, we first present an EA for the case in which problem (2.7) can be solved by deterministic methods, for example, when it is a convex program. In the next section, the method will be extended to more general cases.

3.1. Chromosome Encoding

The feasible bases of (2.4) are encoded as chromosomes. First, (2.4) is put into standard form by introducing slack variables $s$:
$$
-B^{T}u + s = d,\quad u \ge 0,\ s \ge 0. \qquad (3.1)
$$
Let $D = (-B^{T}, I)$, which is an $m \times (m+p)$ matrix, and let $m$ columns of $D$, indexed by $(j_{1}, \ldots, j_{m})$, be chosen at random. If the matrix consisting of these columns is a feasible basis, then it is referred to as an individual (chromosome) and denoted by $(j_{1}, \ldots, j_{m})$.
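The sketch below illustrates this encoding under the reconstructed standardized dual (3.1): a chromosome is a tuple of $m$ column indices of $D$, and decoding it either returns the corresponding dual vector $u$ or rejects the index set when it does not form a feasible basis. The helper name decode is ours, not the paper's.

```python
# Hedged encoding sketch: a chromosome is an m-tuple of column indices of
# D = (-B^T, I), the coefficient matrix of the standardized dual (3.1); it is
# accepted only when the chosen columns form a feasible basis.
import numpy as np

def decode(chromosome, B, d, tol=1e-10):
    """Return the dual vector u encoded by `chromosome`, or None if it is not a feasible basis."""
    p, m = B.shape                          # p lower level constraints, m lower level variables
    if len(chromosome) != m:
        return None
    D = np.hstack([-B.T, np.eye(m)])        # m x (p + m) coefficient matrix of (3.1)
    basis = D[:, list(chromosome)]
    if abs(np.linalg.det(basis)) < tol:
        return None                         # singular: not a basis
    values = np.linalg.solve(basis, d)      # values of the m basic variables
    if np.any(values < -tol):
        return None                         # the basic solution is not feasible
    full = np.zeros(p + m)
    full[list(chromosome)] = values
    return full[:p]                         # keep only the u-part (the dual variables)
```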

3.2. Initial Population

The initial population consists of individuals associated with the feasible bases of (3.1). In order to obtain these individuals, we first take an $x \in X$ and then solve problem (3.1) using the simplex method. Since different feasible bases are visited during the simplex iterations, these bases can be put into the initial population. If the number of these feasible bases is less than the population size $N$, we take a substitute $x$ and repeat the procedure of solving (3.1) until $N$ individuals are found.
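As a simplified stand-in for this simplex-based seeding, the sketch below just samples random index sets and keeps those that decode to feasible bases until the population is full. It reuses the hypothetical decode helper from the previous sketch and does not reproduce the paper's procedure of collecting the bases visited by the simplex method.

```python
# Simplified seeding sketch: randomly sample index sets and keep the ones that
# decode to feasible bases of (3.1).
import random

def initial_population(B, d, pop_size, max_tries=100_000, rng=None):
    rng = rng or random.Random(0)
    p, m = B.shape
    population, tries = set(), 0
    while len(population) < pop_size and tries < max_tries:
        tries += 1
        chromosome = tuple(sorted(rng.sample(range(p + m), m)))
        if decode(chromosome, B, d) is not None:    # keep only feasible bases
            population.add(chromosome)
    return list(population)
```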

3.3. Fitness Evaluation

For each individual $(j_{1}, \ldots, j_{m})$, its fitness can be obtained by the following steps. First, the basic components of the solution of (3.1) are obtained by solving the corresponding basis system, whereas the other components are set to 0; this yields the dual vector $u$. Then, for the known $u$, we solve problem (2.7). If there exists an optimal solution, its objective value is taken as the fitness of the individual; otherwise, the fitness is set to $M$, where $M$ is a positive number large enough.
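The following sketch evaluates one individual under the assumptions made so far: the reconstructed form of (2.7), no additional upper level constraints $G$, a smooth upper level objective passed in as a Python callable, and SLSQP as a stand-in nonlinear solver. The names fitness and BIG_M are ours.

```python
# Fitness sketch: for a decoded dual vector u, solve the simplified problem (2.7)
# (reconstructed form, G omitted for brevity) and return the objective value, or
# a large penalty M when the solver fails.
import numpy as np
from scipy.optimize import minimize

BIG_M = 1e10   # the "positive number large enough" assigned to failed individuals

def fitness(u, F, A, B, b, d, x_bounds, x0, y0):
    n, m = A.shape[1], B.shape[1]
    AB = np.hstack([A, B])
    eq_row = np.concatenate([-A.T @ u, d])   # d^T y = u^T (A x - b), written as eq_row @ z = -b @ u
    cons = [
        {"type": "ineq", "fun": lambda z: b - AB @ z},          # A x + B y <= b
        {"type": "eq",   "fun": lambda z: eq_row @ z + b @ u},  # strong-duality equation
    ]
    bounds = list(x_bounds) + [(0, None)] * m                   # x in the box X, y >= 0
    res = minimize(lambda z: F(z[:n], z[n:]), np.concatenate([x0, y0]),
                   method="SLSQP", bounds=bounds, constraints=cons)
    return (res.fun, res.x) if res.success else (BIG_M, None)
```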

3.4. Crossover and Mutation Operators

Let $p_{1}$ and $p_{2}$ be the selected parents for crossover. We generate the crossover offspring of $p_{1}$ and $p_{2}$ as follows. Let $I$ denote the set of basic columns of $p_{2}$ that are not basic in $p_{1}$, and let $k$ stand for the size of $I$. If $k > 0$, then some elements are chosen at random in $I$. These selected elements are taken as entering variables for the present basis associated with $p_{1}$, and the leaving variables can be determined by using the minimum ratio principle. As a result, an offspring $o_{1}$ is generated. If $k = 0$, let $o_{1} = p_{1}$. Exchanging the roles of $p_{1}$ and $p_{2}$ and executing the procedure mentioned above, the other offspring $o_{2}$ can be obtained.

Let $p$ be a selected parent for mutation. We generate its mutation offspring as follows. First, one element is randomly chosen from the set of nonbasic columns of $p$; this column is taken as the entering variable, and the leaving variable is determined according to the minimum ratio principle. The new basis obtained by the pivot operation is referred to as the offspring of the mutation.
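Both operators reduce to the same primitive, a simplex-style pivot on the basis encoded by the chromosome. The sketch below implements this pivot with the minimum ratio test, again for the reconstructed matrix $D = (-B^{T}, I)$ and right-hand side $d$; pivot and mutate are illustrative names.

```python
# Pivot sketch underlying both crossover and mutation: a chosen entering column
# is brought into the current basis and the leaving column is selected by the
# minimum ratio test on the standardized dual (3.1).
import numpy as np

def pivot(chromosome, entering, B, d, tol=1e-10):
    """Return the basis obtained when `entering` replaces the column chosen by the ratio test."""
    p, m = B.shape
    D = np.hstack([-B.T, np.eye(m)])
    cols = list(chromosome)
    basis = D[:, cols]
    values = np.linalg.solve(basis, d)                  # current basic variable values
    direction = np.linalg.solve(basis, D[:, entering])
    ratios = [values[i] / direction[i] if direction[i] > tol else np.inf for i in range(m)]
    leave = int(np.argmin(ratios))
    if not np.isfinite(ratios[leave]):                  # no positive component: keep the parent
        return tuple(chromosome)
    cols[leave] = entering
    return tuple(sorted(cols))

def mutate(chromosome, B, d, rng):
    """Mutation: a random nonbasic column enters the basis; the ratio test picks the leaver."""
    p, m = B.shape
    nonbasic = [j for j in range(p + m) if j not in chromosome]
    return pivot(chromosome, rng.choice(nonbasic), B, d)
```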

3.5. An Evolutionary Algorithm Based on Duality Principle (EADP)

In this subsection, we present an evolutionary algorithm based on the encoding scheme, fitness function, and evolutionary operators described above. In each generation, an archive $A$ of individuals (larger than the population) is maintained to avoid repeatedly evaluating the fitness of the same individuals generated by the genetic operators.

Step 1 (initialization). Generate initial individuals by taking different $x \in X$, as described in Section 3.2. All of these individuals form an initial population $P(0)$ with population size $N$. Let $t = 0$.

Step 2 (fitness). Evaluate the fitness value of each individual in $P(t)$. Set $A = P(t)$.

Step 3 (crossover). Let the crossover probability be $p_{c}$. For each pair of crossover parents, we first take a random number $r \in (0,1)$; if $r < p_{c}$, then execute the crossover on the pair to get its offspring $o_{1}$ and $o_{2}$. Let $O_{1}$ stand for the set of all these offspring.

Step 4 (mutation). Let the mutation probability be $p_{m}$. For each individual $p$, we take a random number $r \in (0,1)$; if $r < p_{m}$, then execute the mutation on $p$ to get its offspring $o$. Let $O_{2}$ stand for the set of all these offspring.

Step 5 (selection). Let $Q = P(t) \cup O_{1} \cup O_{2}$. For any point in $Q$, if the individual belongs to $A$, then its fitness value is already available; otherwise, one has to evaluate the fitness value of the point. Select the best points from $Q$ and randomly select further points from the remaining ones so that $N$ points in total are chosen. These selected points form the next population $P(t+1)$.

Step 6 (updating). Merge all points in $Q$ into $A$. If the number of elements in $A$ is larger than the prescribed archive size, then randomly delete some points so that $A$ keeps only that many individuals.

Step 7. If the termination condition is met, then stop; otherwise, let $t = t + 1$ and go to Step 3.
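The skeleton below shows how the pieces sketched in the previous subsections could be wired together. It deliberately simplifies the scheme above: the pivot-based mutate stands in for both crossover and mutation, selection is plain elitist truncation, and the archive is just a dictionary of evaluated chromosomes; initial_population, decode, fitness, mutate, and BIG_M are the hypothetical helpers defined in the earlier sketches.

```python
# Illustrative EADP loop skeleton (not the authors' implementation).
import random

def eadp(F, A, B, b, d, x_bounds, x0, y0, pop_size=30, max_gen=200, p_c=0.8, seed=0):
    rng = random.Random(seed)
    archive = {}                                   # chromosome -> fitness, to avoid re-evaluation

    def fit(ch):
        if ch not in archive:
            u = decode(ch, B, d)
            archive[ch] = BIG_M if u is None else fitness(u, F, A, B, b, d, x_bounds, x0, y0)[0]
        return archive[ch]

    population = initial_population(B, d, pop_size, rng=rng)
    for _ in range(max_gen):
        # Pivot-based variation stands in for both crossover and mutation here.
        offspring = [mutate(ch, B, d, rng) for ch in population if rng.random() < p_c]
        merged = sorted(set(population + offspring), key=fit)
        population = merged[:pop_size]             # elitist truncation over distinct bases
    best = population[0]
    return best, fit(best)                         # best basis and its upper level objective value
```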

EADP differs from other existing algorithms in three ways. First, since EADP only searches the feasible bases of the lower level dual problem instead of taking all feasible points into account, the search space is smaller. In addition, there is a local search procedure in EADP: since, for $u$ fixed, solving (2.7) is a local search process for $(x, y)$, this can improve the performance of the proposed algorithm. Finally, most of the existing algorithmic procedures need to assume that the lower level solution is unique, since this assumption makes the problem easier. In the proposed EADP, $y$ is not directly calculated and chosen for any fixed $x$; as a result, the uniqueness restriction can be removed.

4. A Revised Version for Nonconvex Cases

The solutions of a BLPP must be feasible; that is, for any fixed $x$, $y$ must be optimal for the follower problem. The proposed algorithm can guarantee that all points are in $IR$ provided that (2.7) is solved well. In fact, if (2.7) is a convex program, there are dozens of algorithms for this kind of problem. However, when the problem is nonconvex, it is very hard to obtain a globally optimal solution, since most existing classical algorithms cannot guarantee global optimality. In order to overcome this shortcoming, we use a coevolutionary algorithm to search for globally optimal solutions. Let $u_{1}, \ldots, u_{K}$ be the dual vectors of the selected individuals, and for each $u_{i}$, let $(x_{i}, y_{i})$ stand for the solution obtained by solving (2.7), which may be only locally optimal. We generate an initial population as follows: first, the points $(x_{i}, y_{i})$, $i = 1, \ldots, K$, are put into the population, and then for each of them we randomly generate several points according to a Gaussian distribution centered at that point, and these points are also put into the set. The fitness is given in a multicriteria form; that is, there are $K$ evaluation functions, one associated with each $u_{i}$, and to evaluate an individual is to compute all $K$ functions at that point. Next, we present the co-evolutionary algorithmic profile (Co-EA) as follows.
(S1) Generate an initial population as above. Let the generation counter be 0.
(S2) Evaluate all points in the current population.
(S3) Execute the arithmetical crossover and Gaussian mutation to generate the offspring set.
(S4) Evaluate all offspring and, according to each evaluation function, select the best points; all selected points are taken as the next generation of the population.
(S5) If the termination criterion is satisfied, then output the best individual associated with each evaluation function; otherwise, increase the generation counter and turn to (S3).
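A hedged sketch of this co-evolutionary step is given below. It assumes that the constraints of (2.7) are folded into the $K$ evaluation functions as penalties (our choice, not stated in the paper) and uses arithmetical crossover and Gaussian mutation as described; co_ea and its parameters are illustrative.

```python
# Co-EA sketch: Gaussian clones around the K current points, arithmetical
# crossover + Gaussian mutation, and multicriteria selection keeping the best
# points under each of the K (penalized) evaluation functions of (2.7).
import numpy as np

def co_ea(points, eval_fns, n_clones=5, sigma=0.1, generations=100, seed=0):
    """points: list of K stacked (x, y) vectors; eval_fns: K penalized objectives of (2.7)."""
    rng = np.random.default_rng(seed)
    K = len(points)
    pop = [np.asarray(q, dtype=float) for q in points]
    pop += [q + sigma * rng.standard_normal(q.shape) for q in list(pop) for _ in range(n_clones)]

    for _ in range(generations):
        offspring = []
        for _ in range(len(pop) // 2):
            i, j = rng.choice(len(pop), size=2, replace=False)
            lam = rng.random()
            child = lam * pop[i] + (1 - lam) * pop[j]            # arithmetical crossover
            child += sigma * rng.standard_normal(child.shape)    # Gaussian mutation
            offspring.append(child)
        merged = pop + offspring
        keep, per_criterion = [], max(1, len(pop) // K)
        for f in eval_fns:                                       # multicriteria selection
            merged.sort(key=f)
            keep += merged[:per_criterion]
        pop = keep
    return [min(pop, key=f) for f in eval_fns]                   # best point for each criterion
```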

The purpose of Co-EA is to obtain globally optimal solutions to (2.7). Next, we present the revised version of EADP obtained by embedding Co-EA.

Step 1 (initialization). Generate initial individuals by taking different $x \in X$. All of these individuals form the initial population $P(0)$ with population size $N$. Let $t = 0$ and $A = \emptyset$.

Step 2 (fitness). Evaluate the fitness value of each individual in $P(t)$. Let $A = P(t)$.

Step 3 (crossover). Let the crossover probability be $p_{c}$. For each pair of crossover parents, we first take a random number $r \in (0,1)$; if $r < p_{c}$, then execute the crossover on the pair to get its offspring. Let $O_{1}$ stand for the set of all these offspring.

Step 4 (mutation). Let the mutation probability be $p_{m}$. For each individual $p$, we first take a random number $r \in (0,1)$; if $r < p_{m}$, then execute the mutation on $p$ to get its offspring. Let $O_{2}$ stand for the set of all these offspring.

Step 5 (offspring evaluating). Let $Q = P(t) \cup O_{1} \cup O_{2}$. For any point in $Q$, if the individual is in $A$, its fitness value is already available; otherwise, one has to evaluate the fitness value of the point.

Step 6 (selection). Select the best points from $Q$ and randomly select further points from the remaining ones so that $N$ points in total are chosen. These selected points form the next population $P(t+1)$.

Step 7 (coevolution). If $t$ has reached a prescribed level and $t$ is a multiple of a given period, then select the best individual together with several other individuals from the current population, execute Co-EA, and update the fitness values of these individuals.

Step 8 (updating). If the number of elements in $A$ is less than its prescribed size, then some points in $Q$, especially the points modified by Co-EA, are put into $A$ until it has the prescribed number of elements.

Step 9. If the termination condition is met, then stop; otherwise, let $t = t + 1$ and go to Step 3.

5. Simulation Results

In this section, we first select 7 test problems from the literature, denoted by Example 1–Example 7 (Group I). These smaller-scale problems are frequently solved in the literature to illustrate the performance of algorithms.

Example 1 (see [8]). Consider

Example 2 (see [11]). Consider

Example 3 (see [21]). Consider

Example 4 (see [7]). Consider

Example 5 (see [8]). Consider

Example 6 (see [1]). Consider

Example 7 (see [12]). Consider

In order to further illustrate the performance of EADP, we construct 3 larger-scale linear BLPPs (Group II, Example 8–Example 10) of the following type:
$$
\begin{aligned}
\min_{x \in X}\ & c_{1}^{T}x + c_{2}^{T}y\\
\text{s.t.}\ & \min_{y}\ d^{T}y\\
& \quad\ \text{s.t.}\ Ax + By \le b,\ y \ge 0.
\end{aligned}
$$

All coefficients are generated as follows. The coefficients of the upper level objective are randomly generated from a uniform distribution, whereas those of the lower level objective are randomly chosen from another uniform interval. The matrices $A$ and $B$ are also generated from a uniform distribution, and the right-hand side of each constraint is the sum of the absolute values of the left-hand side coefficients. In order to ensure that the constraint region of the generated BLPP is bounded, we select one constraint at random and take its left-hand side coefficients as the absolute values of the corresponding coefficients. The scales of the constructed problems are given in Table 1.
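A sketch of this generator is given below. The sampling intervals were not recoverable from the text, so the range $[-10, 10]$ used here is a placeholder assumption; the right-hand sides and the boundedness device follow the description above, under the same $Ax + By \le b$, $y \ge 0$ orientation used in Section 2.

```python
# Sketch of the Group II instance generator (ranges are assumed, not the paper's).
import numpy as np

def generate_instance(n, m, p, seed=0, coef_range=(-10.0, 10.0)):
    rng = np.random.default_rng(seed)
    lo, hi = coef_range
    c1 = rng.uniform(lo, hi, n)            # upper level objective coefficients (range assumed)
    c2 = rng.uniform(lo, hi, m)
    d = rng.uniform(lo, hi, m)             # lower level objective coefficients (range assumed)
    A = rng.uniform(lo, hi, (p, n))        # lower level constraint matrices (range assumed)
    B = rng.uniform(lo, hi, (p, m))
    # One randomly selected row gets nonnegative coefficients so that, together with
    # y >= 0 and the box X, the constraint region stays bounded.
    k = rng.integers(p)
    A[k], B[k] = np.abs(A[k]), np.abs(B[k])
    # Each right-hand side is the sum of absolute values of its row's coefficients.
    b = np.abs(A).sum(axis=1) + np.abs(B).sum(axis=1)
    return c1, c2, d, A, B, b
```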

The proposed algorithm is used to solve the problems in Group I and Group II. The values of the parameters in EADP are given in Table 2, where Max-gen stands for the maximum number of generations that the algorithm runs. Notice that in the proposed algorithmic approach, the total number of feasible points in the search space is not greater than the number of bases of (3.1). Hence, this value can be taken as a reference when $N$ and Max-gen need to be determined. For a specific problem, if one wants to determine Max-gen in advance, one can tentatively take an experimental value of this parameter from other problems with the same lower level scale. For the parameter $M$, one can first take a positive integer by considering the maximum value of $F$ on the constraint region and then examine the constraint violations of the obtained points. If the constraint violations cannot be accepted, then the integer is multiplied by 10. This process is repeated until the constraint violations are acceptable.

For Example 1–Example 10, the algorithm stops when the maximum number of generations is reached. In the executed algorithm, (2.7) is solved by different methods: the simplex method is adopted for Example 1–Example 5 and Group II, and the active-set method is applied for Example 6, in which (2.7) is a convex quadratic program. For Example 7, the revised EADP is executed since the objective is nonconvex; the corresponding parameter is taken as 5, and the maximum generation number of Co-EA is 100.

For Group I, we execute EADP in 20 independent runs on each problem on a PC (Intel Pentium IV, 2.66 GHz) and record the following data:
(1) the best solution found;
(2) the upper level objective value at the best solution;
(3) the upper level objective value at the worst solution;
(4) the mean value, median, and standard deviation (STD) of the upper level objective values;
(5) the mean CPU time over the 20 runs.

All results for Group I are presented in Table 3, which compares the results found by EADP in 20 runs with those of the compared algorithms and also lists the best solutions found by EADP, where Ref. stands for the related algorithms in the references.

It can be seen from Table 3 that for Example 3 and Example 7, all results found by EADP in 20 runs are better than the results given by the compared algorithms in the references, which indicates that the algorithms in the related references cannot find the globally optimal solutions of these two problems. In particular, for Example 7, the compared algorithm found a local solution with an objective value of 10.625. For the other problems, the best results found by EADP are as good as those of the compared algorithms. In all 20 runs, EADP found the best results for all problems, and the standard deviations are 0, which means that EADP is stable.

Regarding the CPU time, one can see that the time EADP needs to obtain these results is short, which means that the algorithm is efficient.

For Group II, we execute EADP in 10 independent runs on each problem. Since there are no computational results given in the literature, we use the following criterion to measure the simulation results obtained by EADP. For the best run among the 10 independent runs, we select the best individual and randomly generate the other individuals to form a population. The algorithm restarts from this population and is executed for a further 5000 generations. The solutions obtained in this way are taken as the "best" results.

All results obtained for Group II are presented in Table 4, where CPU stands for the mean CPU time in the 10 runs, TB represents the number of times the best result appears in the 10 runs, STD denotes the standard deviation of the objective values in the 10 runs, and the best objective value is also reported. From Group I to Group II, one can see that the increase in CPU time is large, which implies that the computational complexity of BLPPs increases sharply as the dimension becomes larger. Despite the fact that STD is larger because of the larger objective values, TB shows that EADP is reliable in solving larger-scale BLPPs since the minimum rate of success is 70%.

In order to illustrate the convergence of EADP on the larger-scale problems, we select one of the 10 runs for each problem and plot the curves of the objective values versus the generations; refer to Figure 1(a) for Example 8, Figure 1(b) for Example 9, and Figure 1(c) for Example 10. From Figure 1, we can see that the proposed algorithm converges within 3000 generations.

6. Conclusion

For a class of nonlinear bilevel programming problems, this paper develops an evolutionary algorithm based on the duality principle. In the proposed algorithm, the bases of the dual of the lower level problem are used to encode individuals, which makes the search space finite. For nonconvex cases, we design a Co-EA technique to obtain globally optimal solutions. The experimental results show that the proposed algorithm is efficient and effective. In future work, we intend to study, based on optimality results, nonlinear BLPPs with more general lower level problems.

Acknowledgments

This research work was supported in part by the National Natural Science Foundation of China (no. 61065009) and the Natural Science Foundation of Qinghai Province (Innovation Research Foundation of Qinghai Normal University) (no. 2011-z-756).