Abstract

For a class of stochastic linear bilevel programming problems, we first transform the problem into a deterministic linear bilevel covariance programming problem. The deterministic bilevel covariance programming problem is then solved by a backpropagation artificial neural network based on an elite particle swarm optimization algorithm (BPANN-PSO). Finally, we perform simulation experiments, and the results show that the computational efficiency of the proposed algorithm compares favorably with that of the classical algorithm.

1. Introduction

The bilevel programming problem (BLP) is a nested optimization problem in which the feasible region of the upper level problem is determined implicitly by the solution set of the lower level problem. As an optimization tool, the BLP has been widely applied to a variety of practical problems, for example, homeland security [1–3], the modeling of production processes [4], the formulation of optimal tax policies [5–7], strategic behavior in deregulated markets [8], and the optimization of retail channel structures [9]. In addition, the optimization theory of the BLP has been integrated into many other disciplines, such as management [10, 11], chemical engineering [12, 13], structural optimization [14, 15], optimal control [16, 17], facility location [10, 18, 19], and transportation [20–22].

Therefore, many researchers have devoted themselves to developing algorithms for the BLP and have proposed many efficient ones. Traditional methods commonly used to handle the BLP include the Karush-Kuhn-Tucker approach [23–26], the branch-and-bound method [27], and the penalty function approach [28–31]. Despite the significant progress made by traditional optimization towards solving BLPs, these algorithms require properties such as differentiability and continuity.

Due to the limitations of traditional algorithms, heuristics such as evolutionary algorithms are recognized as potent tools for solving BLPs. Mathieu et al. [32] first developed a genetic algorithm (GA) for the bilevel linear programming problem. Motivated by the same reason, other kinds of GAs for BLPs were also presented in [33–36]. Owing to its fast convergence and relative simplicity, the particle swarm optimization (PSO) algorithm has recently been employed for solving BLP problems [37–42].

However, it is worth noting that the papers mentioned above focus only on deterministic bilevel programming problems; stochastic bilevel programming has seldom been studied so far. In 1999, Patriksson and Wynter [43] first proposed stochastic mathematical programs with equilibrium constraints and introduced a framework for hierarchical decision-making under uncertainty. However, they did not give a numerical experiment. Gao, Liu, and Gen [44] presented a hybrid intelligent algorithm for a decentralized multilevel decision-making problem in a stochastic environment in 2004. For the contracting arrangements of long-term contracts and spot market transactions under an uncertain electricity spot market, Wan et al. [45] proposed a stochastic bilevel programming model for the optimal bidding strategies between power seller and buyer. They solved the model by a Monte Carlo approximation method. It is worth noting that the decision variables in both levels are one-dimensional. Soon after, Wan, Fan, and Wang [46] proposed an interactive fuzzy decision-making method for the model in [45]. In 2013, He and Feng [47] presented an approximation algorithm for the compensated stochastic bilevel programming problem. In addition, stability and convergence analyses for bilevel stochastic programming problems can be found in [48–50]. Clearly, these works considered only simple stochastic bilevel models, and few of them studied the numerical performance of the algorithms.

In this paper, we consider the general stochastic linear bilevel programming problem in which the coefficients of the objective functions and of the constraint functions are random variables. We first transform the problem into a deterministic linear bilevel covariance programming problem with expected constraints, and then solve the deterministic bilevel covariance programming model by the BPANN-PSO algorithm. Finally, we perform simulation experiments, and the results suggest that the variance obtained by our algorithm is better than the results in the references when the means of the upper level objective function values are the same. Furthermore, the computational efficiency of our algorithm improves relative to existing methods as the dimension increases.

The rest of this paper is organized as follows. Section 2 introduces the definitions and properties of the stochastic linear bilevel programming problem. Section 3 proposes the BPANN-PSO algorithm for the stochastic linear bilevel programming problem. We use three test problems from the references to measure and evaluate the proposed algorithm in Section 4, and the conclusion is given in Section 5.

2. Stochastic Linear Bilevel Programming

Let $x \in \mathbb{R}^{n_1}$, $y \in \mathbb{R}^{n_2}$, $c_1, c_2 \in \mathbb{R}^{n_1}$, $d_1, d_2 \in \mathbb{R}^{n_2}$, and $b \in \mathbb{R}^{m}$; we consider the stochastic linear bilevel programming problem as follows:

$$
\begin{aligned}
&\min_{x}\; F(x, y) = c_1^{T}x + d_1^{T}y,\\
&\text{where } y \text{ solves}\\
&\qquad \min_{y}\; f(x, y) = c_2^{T}x + d_2^{T}y\\
&\qquad\;\, \text{s.t. } Ax + By \le b,\quad x \ge 0,\; y \ge 0,
\end{aligned}
\tag{1}
$$

where $F(x, y)$ and $f(x, y)$ are the upper level and the lower level objective functions, respectively. The coefficients $c_1$, $d_1$, $c_2$, $d_2$ and the right-hand side $b$ are random variables.

Let $\alpha_i \in (0, 1)$ be the probability of the extent to which violation of the $i$th constraint is admitted, and let $\Pr\{\cdot\}$ denote a probability measure. The $i$th constraint of problem (1) is interpreted as follows:

$$\Pr\{A_i x + B_i y \le b_i\} \ge 1 - \alpha_i, \quad i = 1, \dots, m. \tag{2}$$

Inequality (2) means that the constraint may be violated, but at most an $\alpha_i$ proportion of the time. Let $\Phi_i$ be the distribution function of the random variable $b_i$; then inequality (2) is presented as

$$\Pr\{b_i < A_i x + B_i y\} = \Phi_i(A_i x + B_i y) \le \alpha_i. \tag{3}$$

According to (2) and (3), we obtain inequality (4):

$$A_i x + B_i y \le \Phi_i^{-1}(\alpha_i). \tag{4}$$

Let $\bar{b}_i = \Phi_i^{-1}(\alpha_i)$. Then, from the monotonicity of the distribution function $\Phi_i$, inequality (2) is rewritten as

$$A_i x + B_i y \le \bar{b}_i, \quad i = 1, \dots, m; \tag{5}$$

that is,

$$Ax + By \le \bar{b}, \tag{6}$$

where $\bar{b} = (\bar{b}_1, \dots, \bar{b}_m)^{T}$. Based on (6), problem (1) is rewritten as the following problem with deterministic constraints:

$$
\begin{aligned}
&\min_{x}\; F(x, y) = c_1^{T}x + d_1^{T}y,\\
&\text{where } y \text{ solves}\\
&\qquad \min_{y}\; f(x, y) = c_2^{T}x + d_2^{T}y\\
&\qquad\;\, \text{s.t. } Ax + By \le \bar{b},\quad x \ge 0,\; y \ge 0.
\end{aligned}
\tag{7}
$$
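To make the transformation concrete, the following minimal sketch computes $\bar{b}_i = \Phi_i^{-1}(\alpha_i)$ with SciPy, assuming a normally distributed right-hand side; the numbers are illustrative only and are not taken from the paper.

```python
# Sketch: deterministic right-hand side b_bar_i = Phi_i^{-1}(alpha_i)
# for b_i ~ N(mu_i, sigma_i^2), as in (4)-(6).
from scipy.stats import norm

def deterministic_rhs(mu, sigma, alpha):
    """Inverse-CDF transform of a normal chance constraint."""
    return norm.ppf(alpha, loc=mu, scale=sigma)

b_bar = deterministic_rhs(mu=10.0, sigma=2.0, alpha=0.05)
# b_bar ~= 6.71: enforcing A_i x + B_i y <= 6.71 guarantees
# Pr{A_i x + B_i y <= b_i} >= 0.95.
```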

To deal with the objective functions with random variables in both levels, the minimum covariance model [51] is applied in this paper. Then, problem (7) is rewritten as follows:

$$
\begin{aligned}
&\min_{x}\; \operatorname{Var}[F(x, y)] = (x^{T}, y^{T})\, V_1 \begin{pmatrix} x \\ y \end{pmatrix},\\
&\text{where } y \text{ solves}\\
&\qquad \min_{y}\; \operatorname{Var}[f(x, y)] = (x^{T}, y^{T})\, V_2 \begin{pmatrix} x \\ y \end{pmatrix}\\
&\qquad\;\, \text{s.t. } Ax + By \le \bar{b},\quad x \ge 0,\; y \ge 0,
\end{aligned}
\tag{8}
$$

where $\operatorname{Var}[\cdot]$ denotes the variance of an objective function, and $V_1$ and $V_2$ are the covariance matrices of $(c_1^{T}, d_1^{T})^{T}$ and $(c_2^{T}, d_2^{T})^{T}$, respectively. In this paper, we assume that $V_1$ and $V_2$ are positive definite.
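The step from (7) to (8) rests on the standard identity for the variance of a linear form in random coefficients. Written out in the notation above (with $z$ and $\xi$ as our shorthand for the stacked decision and coefficient vectors):

```latex
% Variance of the upper level objective as a quadratic form:
\operatorname{Var}\!\left(c_1^{T}x + d_1^{T}y\right)
  = \operatorname{Var}\!\left(\xi^{T}z\right)
  = z^{T}\,\operatorname{Cov}(\xi)\,z
  = z^{T} V_1\, z,
\qquad
z = \begin{pmatrix} x \\ y \end{pmatrix}, \quad
\xi = \begin{pmatrix} c_1 \\ d_1 \end{pmatrix}.
```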

Though problem (8) can guarantee a uniform distribution of the objective function values of both levels with only a small drop, the decision makers in both levels often have their own expectations. Let $E_1$ and $E_2$ denote the means of the objective function values for the leader and the follower, respectively. Then, problem (8) is rewritten as follows:

$$
\begin{aligned}
&\min_{x}\; (x^{T}, y^{T})\, V_1 \begin{pmatrix} x \\ y \end{pmatrix}\\
&\quad \text{s.t. } \bar{c}_1^{T}x + \bar{d}_1^{T}y \le E_1,\\
&\text{where } y \text{ solves}\\
&\qquad \min_{y}\; (x^{T}, y^{T})\, V_2 \begin{pmatrix} x \\ y \end{pmatrix}\\
&\qquad\;\, \text{s.t. } \bar{c}_2^{T}x + \bar{d}_2^{T}y \le E_2,\quad Ax + By \le \bar{b},\quad x \ge 0,\; y \ge 0,
\end{aligned}
\tag{9}
$$

where $\bar{c}_1$ and $\bar{c}_2$ are the means of $c_1$ and $c_2$, respectively, and $\bar{d}_1$ and $\bar{d}_2$ are the means of $d_1$ and $d_2$, respectively. For a fixed upper level decision variable $x$, the rational reaction set can be obtained by solving problem (10):

$$
\begin{aligned}
&\min_{y}\; (x^{T}, y^{T})\, V_2 \begin{pmatrix} x \\ y \end{pmatrix}\\
&\;\text{s.t. } \bar{c}_2^{T}x + \bar{d}_2^{T}y \le E_2,\quad Ax + By \le \bar{b},\quad y \ge 0.
\end{aligned}
\tag{10}
$$

Let $z = (x^{T}, y^{T})^{T}$, $F(x, y) = z^{T}V_1 z$, and $f(x, y) = z^{T}V_2 z$. Then, problem (9) is formulated as

$$
\begin{aligned}
&\min_{x}\; F(x, y) = z^{T}V_1 z\\
&\quad \text{s.t. } \bar{c}_1^{T}x + \bar{d}_1^{T}y \le E_1,\\
&\text{where } y \text{ solves}\\
&\qquad \min_{y}\; f(x, y) = z^{T}V_2 z\\
&\qquad\;\, \text{s.t. } \bar{c}_2^{T}x + \bar{d}_2^{T}y \le E_2,\quad Ax + By \le \bar{b},\quad x \ge 0,\; y \ge 0.
\end{aligned}
\tag{11}
$$

For problem (11), the basic notions of the BLP are recalled as follows:

(a) Constraint region of the BLP:
$$S = \{(x, y) : \bar{c}_1^{T}x + \bar{d}_1^{T}y \le E_1,\; \bar{c}_2^{T}x + \bar{d}_2^{T}y \le E_2,\; Ax + By \le \bar{b},\; x \ge 0,\; y \ge 0\}. \tag{12}$$

(b) Feasible set for the lower level problem for each fixed $x$:
$$S(x) = \{y : (x, y) \in S\}. \tag{13}$$

(c) Projection of $S$ onto the upper level decision maker's decision space:
$$S(X) = \{x : \exists y \text{ such that } (x, y) \in S\}. \tag{14}$$

(d) The lower level decision maker's rational reaction set for each fixed $x \in S(X)$:
$$P(x) = \{y : y \in \arg\min\{f(x, \hat{y}) : \hat{y} \in S(x)\}\}. \tag{15}$$

(e) Inducible region:
$$\mathrm{IR} = \{(x, y) : (x, y) \in S,\; y \in P(x)\}. \tag{16}$$

Definition 1. A point $(x, y)$ is feasible if $(x, y) \in \mathrm{IR}$.

Definition 2. A feasible point $(x^{*}, y^{*})$ is an optimal solution if $(x^{*}, y^{*}) \in \mathrm{IR}$ and $F(x^{*}, y^{*}) \le F(x, y)$ for all $(x, y) \in \mathrm{IR}$.

Definition 3. If $(x^{*}, y^{*})$ is the optimistic solution of problem (11), the optimal value is given by $F^{*} = F(x^{*}, y^{*})$.

3. The Algorithm

For problem (11), it is noted that a solution $(x, y)$ is feasible for the upper level problem if and only if $y$ is an optimal solution of the lower level problem for the given $x$. In practice, an approximately optimal solution of the lower level problem is often taken as the optimal response fed back to the upper level problem, and this viewpoint is generally accepted. Based on this fact, the BPANN-PSO algorithm has great potential for solving problem (11). In the following, the BPANN-PSO algorithm is presented for solving problem (11).

3.1. The Structure of BPANN

In this paper, the BPANN includes three layers: the input layer, the hidden layer, and the output layer. The numbers of nodes in the three layers are $n$, $q$, and $m$, respectively. The connectivity weight from the $i$th node of the input layer to the $j$th node of the hidden layer is $w_{ij}$, and $v_{jk}$ denotes the connectivity weight from the $j$th node of the hidden layer to the $k$th node of the output layer. Let $g$ be the sigmoid function $g(u) = 1/(1 + e^{-u})$. Figure 1 shows the diagram of the BPANN.

Let $x_i$ represent the $i$th input value of a training sample. The output of the $j$th node of the hidden layer is given by

$$h_j = g\Bigl(\sum_{i=1}^{n} w_{ij} x_i\Bigr), \quad j = 1, \dots, q. \tag{17}$$

For the output layer, the output of the $k$th node is given by

$$o_k = g\Bigl(\sum_{j=1}^{q} v_{jk} h_j\Bigr), \quad k = 1, \dots, m. \tag{18}$$

Let $\hat{o}_k$ denote the expected output of the $k$th node of the output layer. The squared difference between the computed and desired outputs of the $s$th training pair is given by (19), and the error sum over all training pairs is given by (20):

$$E_s = \frac{1}{2}\sum_{k=1}^{m}\bigl(\hat{o}_k^{(s)} - o_k^{(s)}\bigr)^2, \tag{19}$$

$$E = \sum_{s=1}^{S} E_s, \tag{20}$$

where $S$ is the number of training pairs.
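As a concrete reading of (17)-(20), here is a minimal NumPy sketch of the forward pass and the batch error; the array shapes and the absence of bias terms are our assumptions, since the paper does not state them.

```python
import numpy as np

def sigmoid(u):
    """Sigmoid activation g(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, W, V):
    """Three-layer BPANN forward pass.
    x: input vector, shape (n,)
    W: input-to-hidden weights w_ij, shape (n, q)
    V: hidden-to-output weights v_jk, shape (q, m)
    Returns the output vector o, shape (m,)."""
    h = sigmoid(x @ W)      # hidden layer outputs, eq. (17)
    return sigmoid(h @ V)   # output layer outputs, eq. (18)

def batch_error(X, Y, W, V):
    """Error sum E over all training pairs, eqs. (19)-(20)."""
    return sum(0.5 * np.sum((forward(x, W, V) - y_hat) ** 2)
               for x, y_hat in zip(X, Y))
```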

3.2. The BPANN-PSO Algorithm

In this algorithm, the position of a particle represents the connectivity weights of the BPANN, and the particles in the elite set are the batch training samples. Based on this idea, the BPANN-PSO algorithm is designed as follows.

Each particle encodes the complete weight vector of the network; that is, a particle in the swarm can be presented as $p = (w_{11}, \dots, w_{nq}, v_{11}, \dots, v_{qm})$.

Algorithm 4.
Step 1. Initialize the population with $N_p$ particles, each presented as a weight vector $p = (w_{11}, \dots, w_{nq}, v_{11}, \dots, v_{qm})$. Let $K$ denote the number of particles in the elite set.
Step 2. Train the BPANN using PSO.
Step 2.1. Calculate each particle's fitness value according to (20).
Step 2.2. Determine each particle's personal best and the global best particle.
Step 2.3. Update each particle's velocity and position.
Step 3 (stopping criterion). If the error $E$ falls below a prescribed threshold $\varepsilon$, stop. Otherwise, go to Step 2.
In Step 2.3, the velocity and position are updated by the standard PSO rule with inertia weight $\omega$ and acceleration constants $c_1$ and $c_2$; a code sketch is given below.
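The following NumPy sketch assembles Steps 1-3 of Algorithm 4. The swarm size, iteration budget, and PSO constants are illustrative placeholders rather than the paper's settings, and `batch_error` refers to the sketch in Section 3.1.

```python
import numpy as np

def train_bpann_pso(X, Y, n, q, m, swarm=30, iters=200, eps=1e-3,
                    w=0.7, c1=1.5, c2=1.5):
    """Train the BPANN with PSO: each particle is a flattened (W, V)
    weight vector and its fitness is the batch error E of eq. (20)."""
    dim = n * q + q * m
    pos = np.random.uniform(-1.0, 1.0, (swarm, dim))
    vel = np.zeros((swarm, dim))

    def fitness(p):
        W, V = p[:n * q].reshape(n, q), p[n * q:].reshape(q, m)
        return batch_error(X, Y, W, V)

    pbest = pos.copy()                                  # personal bests
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # global best

    for _ in range(iters):                              # Step 2
        r1 = np.random.rand(swarm, dim)
        r2 = np.random.rand(swarm, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel                                      # Step 2.3
        vals = np.array([fitness(p) for p in pos])      # Step 2.1
        improved = vals < pbest_val                     # Step 2.2
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
        if pbest_val.min() <= eps:                      # Step 3
            break
    return gbest
```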

3.3. The Algorithm for BLP

We propose the BPANN-PSO for problem (11), and we update the upper level decision variables using the method in [52]. However, the way the lower level problems are solved is completely different. In [52], for each updated upper level decision variable, the lower level optimization problem must be re-executed by cooperative coevolution PSO (CCPSO). In other words, the historical optimal solutions do not contribute to the current optimal solution. In our algorithm, for an updated upper level decision variable, we can predict the corresponding lower level optimal solution directly by the BPANN. From the above analysis, we can see that the computational efficiency of our proposed algorithm is better than that of the method in [52]. Furthermore, we also give the basic working frameworks of these two algorithms (see Figure 2).

Algorithm 5.
Step 1 (initialization scheme). Initialize a random population of the upper level variable $x$.
Step 2. For each fixed upper level member $x$, initialize a random population of the lower level variable $y$. Then, perform a lower level optimization procedure to determine the corresponding lower level optimal solution using PSO.
Step 3. Evaluate the fitness values of the complete solutions $(x, y)$ using the upper level objective function and the constraint conditions. Copy all the members into the elite set.
Step 4. Update the upper level decision variables using the simulated binary crossover operator (SBX) and polynomial mutation (PM).
Step 5. For each new upper level variable $x$, predict the corresponding lower level optimal solution using Algorithm 4.
Step 6. Update the elite set: copy all the offspring from the previous step into the elite set and choose the best members as its new members.
Step 7. Perform a termination check. If the termination check is false, go to Step 4.
In Algorithm 5, the lower level PSO uses the same kind of parameters as Algorithm 4: the inertia weight $\omega$ and the acceleration coefficients $c_1$ and $c_2$. At Step 7, the termination criterion based on the variance is described as follows. For each upper level variable $x_i$, the variance of the population at generation $T$ is $\sigma_i^2(T)$. Then, the stop metric is computed as

$$\alpha_v = \sum_{i=1}^{n_1} \frac{\sigma_i^2(T)}{\sigma_i^2(0)}. \tag{21}$$

In this paper, the threshold for $\alpha_v$ is set as $10^{-4}$ for the upper level.
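A minimal sketch of the variance-based termination check of Step 7 follows, under our reading of (21); the normalization by the initial-generation variance is an assumption of this sketch.

```python
import numpy as np

def variance_stop(initial_pop, current_pop, tol=1e-4):
    """Variance-based termination check for Step 7.
    initial_pop, current_pop: arrays of shape (N, n1) holding the upper
    level population at generation 0 and at the current generation T.
    Stops when the summed normalized per-dimension variance alpha_v
    falls below tol (10^-4 in the paper)."""
    var_0 = np.var(initial_pop, axis=0)   # sigma_i^2(0)
    var_T = np.var(current_pop, axis=0)   # sigma_i^2(T)
    alpha_v = np.sum(var_T / var_0)
    return alpha_v <= tol
```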

3.4. Algorithm Complexity Analysis

By the algorithm in [51], for a vector with dimension $n$, each component of the vector has two states, zero or nonzero; therefore, the vector has a total of $2^{n}$ forms. We examine the case where the calculation is most expensive, that is, when only the last combination satisfies the complementary slackness condition. Suppose the number of vertices is $N_v$ and let $t_0$ be the time for computing one check; then the total time required to complete the check is $T_1 = 2^{n} N_v t_0$.

In Algorithm 4 of our paper, the numbers of nodes of the input layer, the hidden layer, and the output layer are $n$, $q$, and $m$, respectively. The batch training sample size is $S$ and the time for computing one multiplication is $t_m$; for one training sample, the computing time is $t_s = (nq + qm)t_m$. The total computing time over all samples is $T_s = S(nq + qm)t_m$. In Algorithm 5, suppose the number of iterations is $K$; then the computing time required to solve problem (11) is $T_2 = KS(nq + qm)t_m$. Here $t_0$ and $t_m$ are almost the same when the algorithms run on the same platform. Comparing $T_1$ and $T_2$, our algorithm has an obvious advantage when the dimension of the decision variable increases, since $T_1$ grows exponentially in $n$ while $T_2$ grows only polynomially.
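The following toy computation illustrates the comparison; the values chosen for $N_v$, $q$, $m$, $S$, and $K$ are arbitrary placeholders, not figures from the paper.

```python
# Growth of the two time bounds with unit costs t0 = tm = 1.
def T1(n, Nv=100):
    """Enumeration bound of [51]: 2^n zero/nonzero patterns."""
    return (2 ** n) * Nv

def T2(n, q=None, m=None, S=50, K=500):
    """BPANN-PSO bound of Section 3.4: K iterations over S samples."""
    q = q if q is not None else 2 * n
    m = m if m is not None else n
    return K * S * (n * q + q * m)

for n in (10, 20, 30):
    print(n, T1(n), T2(n))
# T1 overtakes T2 between n = 10 and n = 20 and then explodes,
# which is the sense in which higher dimensions favor our algorithm.
```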

4. Numerical Experiment

In this section, we test the algorithm using three examples. All results presented in this paper were obtained on a personal computer (CPU: AMD Phenom(tm) II X6 1055T, 2.80 GHz; RAM: 3.25 GB) using a C# implementation of the proposed algorithm.

Example 1. We consider the problem from [51], with the problem data as given there; all the right-hand side values are normal random variables. The mean, variance, and admitted violation probability of each right-hand side value are shown in Table 1.
According to the above conditions, problem (22) can be rewritten as problem (23). We solve problem (23) with the constraint on the means of the objective function values, and we also solve problem (23) without this constraint. From Table 2, we can see that the variance obtained by our algorithm is better than the results in [51] when the means of the upper level objective function values are the same. Moreover, the computational efficiency of our algorithm is better.

Example 2. We consider the problem from [51], with the objective coefficients and the coefficient matrices of the constraints as given there. The expectations of the random variables are shown in Table 3. According to model (8), problem (24) can be rewritten as problem (26). For problem (26), we solve the problem with the constraint on the means of the objective function values, and we also solve it without this constraint. From Table 4, we can see that the variance obtained by our algorithm is better than the results in [51] when the mean of the upper level objective function value is almost the same. Furthermore, the computational efficiency of our algorithm improves as the number of dimensions increases.
Examples 1 and 2 are general stochastic linear bilevel programming problems; that is to say, the coefficients of the objective functions and of the constraint functions are random variables. Furthermore, the algorithm is also effective for stochastic linear bilevel programming with chance constraints. We consider the compensated stochastic linear bilevel programming problem as follows.

Example 3. We consider the problem from [47], with the problem data as given there. We set up a 10-sample model and a 30-sample model for problem (27), obtaining problems (28) and (29), respectively.
We solve problem (28) and problem (29) by our algorithm and by the method in [47]. From Table 5, we can see that the computational efficiency of our algorithm is better while the two methods obtain almost the same optimal solutions.

5. Conclusion and Future Works

In this paper, we designed the BPANN-PSO algorithm to solve the general stochastic linear bilevel programming problem, and three test problems from the references were used to measure and evaluate the proposed algorithm. The results suggest that the variance obtained by our algorithm is better than the results in the references when the mean of the upper level objective function value is almost the same. Furthermore, the computational efficiency of our algorithm improves as the number of dimensions increases.

In future work, we will further discuss how to efficiently use infeasible solutions with good performance near the optimum. This kind of discussion could improve the performance of our BPANN-PSO, particularly when the optimal front lies on the boundary between the feasible and infeasible regions.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61673006) and the China Scholarship Council (201708420111).