Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 6710929, 15 pages
https://doi.org/10.1155/2017/6710929
Research Article

Solving a Two-Stage Stochastic Capacitated Location-Allocation Problem with an Improved PSO in Emergency Logistics

College of Field Engineering, The PLA University of Science and Technology, Nanjing 210000, China

Correspondence should be addressed to Wanhong Zhu; zhuwanhong2016@gmail.com

Received 12 November 2016; Revised 12 March 2017; Accepted 23 March 2017; Published 31 May 2017

Academic Editor: Jorge Magalhaes-Mendes

Copyright © 2017 Ye Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A stochastic expected value model and its deterministic conversion are developed to formulate a two-stage stochastic capacitated location-allocation (LA) problem in emergency logistics, in which the number and capacities of supply centers are both decision variables. To solve these models, an improved particle swarm optimization algorithm with a Gaussian cloud operator, a Restart strategy, and an adaptive parameter strategy is developed. The algorithm is integrated with the interior point method to solve the second-stage model. A numerical example demonstrates the effectiveness and efficiency of the conversion method for the stochastic model and of the proposed strategies that improve the algorithm.

1. Introduction

A location-allocation (LA) problem, also known as a facility location problem (FLP), involves locating a number of facilities and allocating customers to them so as to minimize the cost of satisfying customer demands. It is an important problem in supply chain and logistics management and greatly affects long-term transportation and storage decisions. Many enterprises and government departments focus on this problem to reduce cost and improve efficiency, especially in emergency logistics, given the frequent occurrence of disasters, epidemics, security incidents, and other emergencies. The problem has complex and uncertain features, such as changing demands, allocations, and locations of customers or facilities.

Many studies have been conducted on the basic LA problem since the proposal of Cooper [1] in 1963. However, most of these studies addressed deterministic cases rather than uncertain ones. Over the past two decades, several models for uncertain settings have been proposed and solved with different algorithms. Logendran and Terrell [2] first considered a stochastic uncapacitated LA problem and proposed an expected value model (EVM) to maximize net profits. Carrizosa et al. [3, 4] proposed an LA problem that considers the locations of both customers and facilities, which may be regions with several probability distributions. Liu [5, 6] contributed to the uncertainty theory by proposing three stochastic models [7] and three fuzzy programing models [8] for the capacitated LA problem. A hybrid intelligent algorithm consisting of a network simplex algorithm, a simulation, and a genetic algorithm was developed to solve the stochastic and fuzzy models above. Silva and De La Figuera [9] studied the capacitated facility location problem with constrained backlogging probabilities and solved it using a heuristic method based on a reactive greedy adaptive search procedure. Wang and Shi-Wei [10] proposed a robust optimization model for a logistics center LA problem and compared it with stochastic and deterministic optimization models; two algorithms, an enumeration method and a genetic algorithm, were adopted to solve the problem. Yao et al. [11] considered a joint facility LA and inventory problem with stochastic demands, which involves identifying the best warehouse locations and inventory levels and allocating customers; a heuristic integrating approximation and transformation techniques was developed to solve it. Wen and Kang [12] considered a facility LA problem with random fuzzy demands.
They proposed a hybrid intelligent algorithm, similar to the method of Zhou and Liu [8], which consists of the simplex algorithm, a random fuzzy simulation, and a genetic algorithm. A similar method was adopted by Mousavi and Niaki [13] to solve an LA problem with fuzzy demands and customer locations that were normal random variables. Vidyarthi and Jayaswal [14] proposed a nonlinear integer programing model to solve an LA problem with immobile servers, stochastic demands, and congestion. Pereira et al. [15] presented a probabilistic maximal covering LA problem and proposed a hybrid algorithm to solve it: a linear programing model to efficiently solve small and medium problems and a flexible adaptive large neighborhood search heuristic for large problems. Alizadeh et al. [16] considered a capacitated multifacility LA problem with stochastic demands, in which the capacitated subsources of each facility can be utilized when the demands exceed the planned total requirements. Alizadeh et al. [17] transformed the mixed-integer nonlinear programing model into a simpler formulation and proposed a genetic algorithm (GA) and a colonial competitive algorithm (CCA) to solve medium and large problems.

In this study, we consider a two-stage stochastic capacitated LA problem (SCLAP) in the context of emergency logistics. In emergency logistics management, the core problem is utilizing the limited relief supplies rapidly and efficiently. Hence, predesigned supply centers, that is, emergency logistics distribution centers, are important. Determining the "appropriate" number, size, and location of supply centers is a comprehensive decision problem whose parts should not be addressed separately. The demands of different customers are mostly uncertain and depend largely on the scenario. Hence, we focus on the uncertainty in demand quantity, which is assumed to be the only independent stochastic variable in this paper and to follow a given regular stochastic distribution. To solve the SCLAP, two conditions must be satisfied: the constraint of stochastic demand quantities and the minimization of the generalized cost. The generalized cost consists of two parts: the sum of the costs of building and maintaining supply centers and the stochastic costs of transportation from each supply center to each customer. Therefore, we determine the appropriate number, capacities, and locations of supply centers in this paper.

In a traditional capacitated LA problem (CLAP), each customer can be supplied by any existing supply center and by more than one center at the same time. Hence, the problem is NP-hard and difficult to solve [7]. Moreover, in the two-stage SCLAP, the number, the capacities, and the locations of supply centers are all decision variables. Research on this model is scarce, and few relevant papers have been found [18–23]. Furthermore, most of that research considered the number and the capacities of supply centers separately; no research that considers both variables together has been found.

To solve this model efficiently, an improved particle swarm optimization (PSO) algorithm is proposed. The algorithm incorporates three improvement strategies: the Gaussian cloud operator, the Restart strategy, and the adaptive parameter strategy. The second stage of the problem is modeled as a linear program; hence, we adopt the interior point method instead of the time-consuming simplex method. We convert the initial stochastic programing model into a crisp model, which reduces the computing time dramatically, based on the assumption of demands with independent regular distributions and the uncertainty theory proposed by Liu [6].

The remainder of this paper is organized as follows. Section 2 describes the random EVM and the crisp model for the two-stage SCLAP. Section 3 presents the details of the hybrid algorithm solution to the model. Section 4 introduces a case study of the new model and verifies the algorithm efficiency with the improvement strategies. Section 5 concludes with the contributions and innovations of this paper and presents the future research directions.

2. Model Formulation

2.1. Problem Description and Theoretical Foundation

To model the two-stage SCLAP, the following assumptions are made: the graph of all the nodes is complete; each customer node can be connected with all supply nodes but not with another customer node; the weight of each edge between two nodes is measured by the Euclidean distance multiplied by the transportation volume; the locations of customer nodes are fixed while the demand quantities are stochastic; and the capacity constraint is imposed only on supply nodes. The notation and variables used in the following formulations are defined in Descriptions of Notations and Variables.

To model the two-stage SCLAP, we first apply the EVM introduced by Zhou and Liu [7] to the SCLAP. Then, we extend the classic one-stage EVM to a two-stage model and provide a deterministic equivalent form. We begin by introducing several basic definitions and theorems from probability and uncertainty theory.

Definition 1 (see [5]). Let Ω be a nonempty set and A the σ-algebra of the subsets (called events) of Ω. The set function Pr is called a probability measure if it satisfies the following conditions.

Axiom 1 (normality). Pr{Ω} = 1.

Axiom 2 (nonnegativity). Pr{A} ≥ 0 for any event A.

Axiom 3 (countable additivity). For every countable sequence of mutually disjoint events {A_i}, we obtain Pr{⋃_i A_i} = Σ_i Pr{A_i}.

Definition 2 (see [5]). Let Ω be a nonempty set, A the σ-algebra of the subsets of Ω, and Pr the probability measure. Then, the triplet (Ω, A, Pr) is called a probability space.

Definition 3 (see [5]). A random variable ξ is a measurable function from the probability space (Ω, A, Pr) to the set of real numbers; that is, for any Borel set B of real numbers, the set {ω ∈ Ω : ξ(ω) ∈ B} is an event.

Liu [6] also proposed the definitions for measure inversion theorem, regular uncertainty distributions, and inverse uncertainty distribution as supplements to the uncertainty theory. The uncertainty distributions are specifically presented as stochastic distributions, and the uncertainty measure can also be replaced by the probability measure Pr. He proposed several theorems that can help in the conversion of the stochastic model into a crisp model based on these definitions. The related definitions and theorems are listed as follows.

Definition 4 (see [6] measure inversion theorem). Let ξ be an uncertain variable with an uncertainty distribution Φ. Then, for any real number x, we have Pr{ξ ≤ x} = Φ(x) and Pr{ξ > x} = 1 − Φ(x).

Definition 5 (see [6] regular uncertainty distribution). An uncertainty distribution Φ is regular if it is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1, and Φ(x) → 0 as x → −∞ and Φ(x) → 1 as x → +∞.

Definition 6 (see [6] inverse uncertainty distribution). Let ξ be an uncertain variable with a regular uncertainty distribution Φ. Then, the inverse function Φ⁻¹ is called the inverse uncertainty distribution of ξ.

Theorem 7 (see [6]). Let ξ₁, ξ₂, …, ξₙ be independent uncertain variables with regular uncertainty distributions Φ₁, Φ₂, …, Φₙ, respectively. If f is a strictly increasing function, then ξ = f(ξ₁, ξ₂, …, ξₙ) has an inverse uncertainty distribution Ψ⁻¹(α) = f(Φ₁⁻¹(α), Φ₂⁻¹(α), …, Φₙ⁻¹(α)).

Theorem 8 (see [6]). Let ξ₁ ~ N(e₁, σ₁) and ξ₂ ~ N(e₂, σ₂) be independent normal uncertain variables. Then, the sum ξ₁ + ξ₂ is also a normal uncertain variable; that is, ξ₁ + ξ₂ ~ N(e₁ + e₂, σ₁ + σ₂).

2.2. Initial Expected Value Model

The mathematical formulation, an expected value model (EVM) proposed by Zhou and Liu for the initial SCLAP, can be defined as the objective function (9) subject to the constraints (10)–(14) [7], where E[ξ] denotes the expected value of the uncertain variable ξ; (9) will then be reformulated as (16).

In this mathematical model, the decision variables are the locations of the supply centers, and the objective function is the minimum of the total transportation cost, that is, the sum over all supply centers and customers of the demand multiplied by the distance, as in (9) or (16). The first constraint (see (10)) states that each customer's stochastic demand must be satisfied. The second constraint (see (11)) states that the capacity of each supply center must be sufficient to supply all the customers it serves. Equations (12), (13), and (14) keep the variables within their valid ranges.
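To make the allocation structure concrete, the following minimal Python sketch builds the demand-times-distance transportation cost and computes a feasible allocation with a cheapest-edge-first heuristic. This heuristic merely stands in for the exact linear program of the model; all coordinates, capacities, and demands are illustrative assumptions, not data from the paper.

```python
import math

def euclid(p, q):
    # Euclidean distance between two planar points
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_allocate(centers, capacities, customers, demands):
    """Allocate customer demand to supply centers, cheapest edge first.

    A simple feasible-allocation heuristic standing in for the allocation
    linear program of the EVM; returns (flows, cost), where flows[(i, j)]
    is the volume shipped from center i to customer j.
    """
    cap = list(capacities)
    dem = list(demands)
    # Enumerate every center-customer edge, sorted by distance.
    edges = sorted(
        (euclid(centers[i], customers[j]), i, j)
        for i in range(len(centers)) for j in range(len(customers))
    )
    flows, cost = {}, 0.0
    for d, i, j in edges:
        move = min(cap[i], dem[j])
        if move > 0:
            flows[(i, j)] = flows.get((i, j), 0.0) + move
            cap[i] -= move
            dem[j] -= move
            cost += d * move   # cost = demand volume x distance
    return flows, cost

# Illustrative two-center, three-customer instance.
centers = [(0.0, 0.0), (10.0, 0.0)]
capacities = [60.0, 60.0]
customers = [(1.0, 1.0), (9.0, 1.0), (5.0, 4.0)]
demands = [30.0, 30.0, 40.0]
flows, cost = greedy_allocate(centers, capacities, customers, demands)
print(round(cost, 3))
```

Because the total capacity (120) exceeds the total demand (100), the heuristic always produces a demand-feasible allocation, though not necessarily the LP optimum.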

According to Zhou and Liu [7], the EVM can be solved by a hybrid intelligent algorithm that consists of the network simplex algorithm, stochastic simulations, and the GA. However, in emergency logistics management, in addition to deciding the supply center locations, the number and capacities of supply centers should also be considered. Therefore, the SCLAP should be extended into a two-stage model. The number and capacities of supply centers are both fixed in the initial EVM, whereas in the two-stage EVM they are decision variables.

2.3. Two-Stage Expected Value Model

In the two-stage EVM for the SCLAP, the decision variables extend to the number p, the capacities, and the locations of the supply centers. The first stage determines the minimum value of the generalized cost and generates the number and capacity values for the second stage. The second stage utilizes these values and determines the minimum value of the stochastic transportation cost of meeting demand. In addition, to provide a comprehensive description of the optimization objective in emergency logistics, we introduce a nonlinear function based on the objective function of the initial model to calculate the generalized cost. The two-stage EVM for the SCLAP can be formulated as follows.

The first stage is the objective (17) subject to the constraints (18) and (19); the second stage is the objective (20) subject to its own constraints, where the fixed cost of constructing a supply center (a positive constant) and the variable cost coefficient of maintaining an operating supply center are given parameters. Equation (17) represents the expected generalized cost. The first portion is the sum of the costs of constructing and maintaining the supply centers, and the latter portion is the sum of the expected transportation costs from each supply center to each customer, similar to the initial EVM. The first constraint (see (18)) ensures that the capacity of each supply center is limited to a reasonable range between its lower and upper bounds. Equation (19) stipulates that the total capacity of the supply centers must exceed the total expected demands of the customers, regardless of the capacity values or the number of supply centers p. The formulations in the second stage denote the same items as those in the initial model, except that the previously fixed number and capacities become variables.
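The first-stage generalized cost described above can be sketched as a small Python function; the fixed construction cost, the maintenance coefficient, and the expected transportation cost value below are all assumed for illustration.

```python
def generalized_cost(capacities, transport_cost, a=500.0, b=1.0):
    """First-stage generalized cost of the two-stage EVM (sketch).

    a              -- fixed cost of constructing one supply center (assumed)
    b              -- variable cost coefficient for maintained capacity (assumed)
    transport_cost -- expected second-stage transportation cost (assumed value)
    """
    # Cost of building and maintaining each opened supply center,
    # plus the expected transportation cost from the second stage.
    build_and_maintain = sum(a + b * s for s in capacities)
    return build_and_maintain + transport_cost

# Two centers of capacity 60 each, with an assumed expected transport cost.
print(generalized_cost([60.0, 60.0], 1200.0))
```

The function makes explicit the trade-off discussed later in Section 4: opening more centers raises the first term while usually lowering the transportation term.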

2.4. Deterministic Equivalent Models

Evidently, the two-stage EVM is much more complicated than the initial model, and the stochastic simulation method is extraordinarily time-consuming because the number and the capacities of supply centers are both unknown. However, on the basis of the definitions and theorems in Section 2.1, if we assume that the demand variables follow nonnegative normal distributions, we can convert the two-stage EVM of the SCLAP into a deterministic two-stage model. The equivalent formula of the first-stage model can be expressed as follows, together with its converted constraints.

Proof. Customer demand follows the normal distribution (Theorem 8), which belongs to the regular uncertainty distributions, and the objective is a strictly increasing function of the demands. Hence, one has the following.
(1) The objective conversion follows from Theorem 7. (2) The constraint conversion follows according to Theorem 8. The equivalent formula of the second-stage model can be established as (29), subject to (30).

Proof. Customer demand follows the normal distribution (Theorem 8), which belongs to the regular uncertainty distributions. Hence, one has the following.
(1) The objective conversion proceeds as in the first stage. (2) The constraint conversion proceeds likewise. After the conversion of the two-stage EVM, we obtain a two-stage deterministic model, which can be solved much more easily. Similarly, we can convert a chance-constrained programing (CCP) model into a deterministic model. Assuming that the supply centers satisfy the demand of each customer with a given probability, that the total supply capacity satisfies the total demand of the customers with a given probability, and that the demand variables comply with the same kind of regular stochastic distribution, the CCP formulation of the two-stage SCLAP can be obtained (only the parts different from the EVM are listed for simplicity).
Equation (19) is reformulated as (33), and (21) is reformulated as (34). According to Theorem 7, the stochastic function is strictly increasing. Therefore, one has the following.
Equation (33) can be converted into a deterministic constraint by Theorem 7, and (34) can be converted likewise. However, if the SCLAP is formulated as a dependent-chance programing (DCP) model (again, only the parts different from the EVM are listed), (17) and (19) are reformulated with probability-maximization objectives. Such an objective, which maximizes a probability, cannot be converted into a deterministic form; hence, the stochastic simulation process must be activated to calculate the probability. In this paper, we focus only on the EVM and develop an improved PSO (IPSO) algorithm to solve it, given that the two-stage SCLAP is newly proposed.
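As an illustration of the CCP conversion, a single chance constraint of the form Pr{supply ≥ demand} ≥ α with a normal demand reduces to requiring supply of at least the α-quantile of the demand distribution. The sketch below uses only the Python standard library; the mean, standard deviation, and confidence level are illustrative assumptions.

```python
from statistics import NormalDist

def ccp_demand_bound(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    Pr{supply to a customer >= demand} >= alpha for demand ~ N(mu, sigma^2):
    the supply must be at least the alpha-quantile of the demand.
    (Sketch; symbol names are assumptions, not the paper's notation.)"""
    return NormalDist(mu, sigma).inv_cdf(alpha)

# A customer with mean demand 50 and standard deviation 10,
# served with 95% confidence, needs roughly mu + 1.645 * sigma supply.
print(round(ccp_demand_bound(50.0, 10.0, 0.95), 2))
```

At α = 0.5 the bound collapses to the mean, which is exactly the crisp EVM constraint; larger α values inflate the required capacity, matching the intuition behind (33) and (34).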

3. The Improved PSO (IPSO)

The LAP has been proven to be NP-hard [24]; hence, so is the two-stage SCLAP. Heuristic methods are the best choice for handling large NP-hard problems, especially under uncertain environments. We develop an IPSO combined with the interior point method [25] to solve the deterministic equivalent conversion of the two-stage EVM.

3.1. Design of the IPSO

In 1995, Kennedy and Eberhart [26] first proposed a swarm intelligence optimization algorithm, inspired by the flocking of birds and the schooling of fish against predation, called particle swarm optimization (PSO). PSO has been successfully applied to parameter optimization [27], combinatorial optimization [28], pattern recognition [29, 30], data mining [31], and other fields [32, 33]. However, a local optimum is often obtained when the general PSO algorithm is applied to the continuous location of distribution centers. Therefore, scholars have proposed many improvement strategies for a more efficient and effective PSO [34–37]. Recently, Zhan et al. [38] proposed an improved PSO based on neighbor heuristics and Gaussian cloud learning, and the results proved its superiority over many PSO variants. We also adopt a Gaussian cloud operator combined with an adaptive parameter strategy and a Restart strategy to improve the general PSO algorithm for the two-stage SCLAP. For a detailed description of the IPSO algorithm, a flow chart is provided in Figure 1.

Figure 1: IPSO algorithm flow chart.

In Figure 1, the loop is repeated until the iteration limit is reached, and the three improvement strategies, shown in the curved boxes, are adopted. If no better solution is found within a given number of iterations, the Restart strategy is activated; otherwise, the Gaussian cloud operator is activated. In addition, several IPSO parameters change adaptively along with the iterations (the adaptive parameter strategy).

Based on (27), the lower bound of the supply center number p can be determined, where [·] denotes taking the integer portion. The capacity of each supply center must lie within its bounds: if a candidate capacity exceeds the upper bound, a penalty is added to the generalized cost function, and if it is less than the lower bound, it is reset to the lower bound. Therefore, a minimum number of supply centers exists, but a maximum number does not. The cost of building and maintaining supply centers is greatly wasted if their total capacity exceeds the actual expected demands of the customers, especially in the EVM and its crisp variant. The fixed cost of building a supply center is the decision factor in choosing the best number p for the optimal solution (discussed in Section 4). For each p, the main procedure of the IPSO can be described as follows.
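The minimum feasible number of supply centers follows directly from the requirement that p centers at the maximum capacity must cover the total expected demand. A minimal sketch, with assumed symbol names and illustrative data:

```python
import math

def min_centers(expected_demands, s_max):
    """Lower bound on the number p of supply centers: the total expected
    demand E(TD) must not exceed p * s_max, where s_max is the capacity
    upper bound of a single center.  Names and data are illustrative."""
    return math.ceil(sum(expected_demands) / s_max)

# Four customers with expected demands totaling 220, centers capped at 100:
# at least ceil(220 / 100) = 3 centers are required.
print(min_centers([50, 60, 70, 40], 100))
```

No upper bound on p arises from feasibility; as the text notes, building more centers than needed only wastes construction and maintenance cost.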

Step 1. Initialize each individual of the particle swarm population with a given size, where each individual must satisfy the constraints (introduced in Section 3.3) and is generated randomly as in Algorithm 1. TD denotes the total demand of the customers, and E(TD) denotes its expected value.

Algorithm 1: The initialization algorithm.

Step 2. Encode the initial individuals into normalized values to update the particle swarm. In the normalization formula, the upper and lower bounds of the location coordinates define the normalization range; specifically, we set the position coordinate bounds accordingly. Then, the particle individuals are normalized, and each normalized vector equals the position of a particle in the continuous search space.
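The encoding and decoding in Steps 2 and 7 are a standard min-max mapping between the original coordinate range and the unit interval; a minimal sketch with assumed bounds:

```python
def normalize(x, lo, hi):
    """Map a coordinate into [0, 1] for the particle search space."""
    return (x - lo) / (hi - lo)

def denormalize(u, lo, hi):
    """Inverse mapping, used when decoding the global best solution."""
    return lo + u * (hi - lo)

# Illustrative round trip with coordinate bounds [0, 100] (assumed values).
x = 37.5
u = normalize(x, 0.0, 100.0)
print(u, denormalize(u, 0.0, 100.0))
```

The round trip is exact up to floating-point precision, so decoding the global best-so-far solution in Step 7 recovers coordinates in the original range.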

Step 3. Set the parameters for the IPSO algorithm operating at each iteration. These parameters are listed in Table 1.
Several parameters change adaptively at each iteration, as discussed in Section 3.2. The positions are set between 0 and 1 because the individuals have been normalized. The speed limitation can be set to any reasonable symmetric interval. The dimensions of the position and speed vectors are both equal to 3.

Table 1: Description of parameters in the IPSO algorithm.

Step 4. Check the constraints for each individual. If feasible, calculate the fitness values of the second and first stage objective functions. The second-stage programing is linear programing, which is solved by the interior point method and discussed in Section 3.3. Otherwise, regenerate the individual using the initialization process in Algorithm 1.

Step 5. Update the position and speed of the particles using different strategies. If the global optimum does not improve within a given number of cumulative iterations, the Restart strategy is activated to reinitialize all individuals based on the global best-so-far solution and the Gaussian cloud model. Otherwise, the Gaussian cloud operator is activated to update the particles based on the global best-so-far solution and the individual best-so-far solutions. The two updating strategies are discussed in Section 3.2.

Step 6. Repeat Steps 4 and 5 until the iteration limit is reached.

Step 7. Decode the global best-so-far solution back into the initial range using the inverse of the normalization formula in Step 2, and output the global optimum.
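Steps 1–7 above can be condensed into the following skeleton. It is a sketch under stated assumptions, not the paper's implementation: positions are normalized to [0, 1], the Gaussian cloud operator perturbs one random dimension around the global best, the entropy decays over the iterations, and the Restart strategy rebuilds the population after a stall; all parameter values are assumed.

```python
import random

def ipso(fitness, dim=3, pop=10, iters=200, stall_limit=20, rng=random):
    """Skeleton of the IPSO main loop (sketch with assumed details)."""
    # Step 1-2: random normalized population.
    X = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    gbest, gfit = min(((x, fitness(x)) for x in X), key=lambda t: t[1])
    stall = 0
    for t in range(iters):
        for i in range(pop):
            # Gaussian cloud operator: perturb one random dimension
            # around the global best with adaptively decaying entropy.
            k = rng.randrange(dim)
            en = 0.05 * (1 - t / iters) ** 2
            sigma = abs(rng.gauss(en, 0.1 * en)) + 1e-12
            x = list(X[i])
            x[k] = min(1.0, max(0.0, rng.gauss(gbest[k], sigma)))
            X[i] = x
        best, bfit = min(((x, fitness(x)) for x in X), key=lambda t2: t2[1])
        if bfit < gfit:
            gbest, gfit, stall = best, bfit, 0
        else:
            stall += 1
        if stall >= stall_limit:
            # Restart strategy: rebuild the population around gbest.
            X = [[min(1.0, max(0.0, rng.gauss(g, 0.1))) for g in gbest]
                 for _ in range(pop)]
            stall = 0
    return gbest, gfit

# Toy objective: minimize squared distance to (0.5, 0.5, 0.5).
random.seed(42)
sol, val = ipso(lambda x: sum((xi - 0.5) ** 2 for xi in x))
print(round(val, 4))
```

In the actual algorithm, the fitness call would check the constraints and solve the second-stage linear program; the toy objective here only demonstrates the control flow.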

3.2. Improvement Strategies
3.2.1. Gaussian Cloud Operator

The Gaussian cloud operator is an algorithm application of the membership clouds. The membership cloud was first proposed by Deyi et al. [39], a member of the Chinese Academy of Sciences. The method bridges the gap between quantitative methodology and qualitative methodology based on the fuzzy set theory and has been successfully applied in algorithm improvement [38, 40]. A classic Gaussian cloud is described in Figure 2.

Figure 2: Classic Gaussian cloud with expectation Ex, entropy En, and hyperentropy He.

Each particle position vector in a multidimensional solution space represents a potential solution and is accordingly associated with a speed vector. The individual best-so-far solution of each particle and the global best-so-far solution are maintained based on the previous fitness value of each particle. In a traditional PSO updating process, the individual and global best-so-far solutions are used as heuristic factors in the speed update formula (39); the notations are shown in Table 1. However, with the Gaussian cloud operator, (39) is modified so that cloud drop generator functions replace the corresponding terms. The generator can be described as follows.

Step 1. Generate a normal random number En′ with expectation En and standard deviation He; that is, En′ ~ N(En, He).

Step 2. Generate a normal random number with expectation Ex and standard deviation En′; that is, x ~ N(Ex, En′).

We do not replace every dimension of the initial position with cloud drops because doing so may induce excessive noise and disturbance. The reasonable approach is to replace one randomly chosen dimension of the initial position at each iteration.
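The two-step drop generation and the single-dimension replacement can be sketched as follows; En and He values are assumptions for illustration.

```python
import random

def cloud_drop(ex, en, he, rng=random):
    """Two-step Gaussian cloud drop generator (Section 3.2.1):
    1) draw En' ~ N(En, He); 2) draw the drop ~ N(Ex, |En'|).
    Symbol names follow the paper; the implementation is a sketch."""
    en_prime = rng.gauss(en, he)
    return rng.gauss(ex, abs(en_prime))

def cloud_perturb(position, en, he, rng=random):
    """Replace one random dimension of the position with a cloud drop,
    avoiding the excessive noise of perturbing every dimension."""
    k = rng.randrange(len(position))
    out = list(position)
    out[k] = cloud_drop(position[k], en, he, rng)
    return out

random.seed(1)
print(cloud_perturb([0.2, 0.5, 0.8], en=0.05, he=0.01))
```

Using |En′| as the second-step standard deviation guards against the rare negative draw in Step 1; the hyperentropy He controls how much the dispersion itself fluctuates between drops.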

3.2.2. Restart Strategy

The Restart strategy is another method of keeping the algorithm from premature convergence. If the global optimum does not improve within a given number of iterations, the particle population is disrupted and reinitialized based on the global best-so-far solution. In the reinitialization, we also use the Gaussian cloud operator, with different input parameters. Subsequently, each position vector is checked and modified, as in Algorithm 1, to satisfy the limitation constraints.
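A minimal sketch of this reinitialization, assuming normalized positions in [0, 1] and illustrative entropy parameters:

```python
import random

def restart(gbest, pop_size, en=0.1, he=0.02, rng=random):
    """Restart strategy sketch: when the global optimum stalls, rebuild the
    whole population as Gaussian cloud drops around the global best-so-far
    solution, clamped back into the normalized search space [0, 1].
    Parameter values are assumptions for illustration."""
    population = []
    for _ in range(pop_size):
        individual = []
        for ex in gbest:
            en_prime = rng.gauss(en, he)        # perturbed entropy
            x = rng.gauss(ex, abs(en_prime))    # cloud drop around gbest
            individual.append(min(1.0, max(0.0, x)))
        population.append(individual)
    return population

random.seed(7)
pop = restart([0.3, 0.6, 0.9], pop_size=5)
print(len(pop), all(0.0 <= x <= 1.0 for ind in pop for x in ind))
```

Because the restart uses a larger entropy than the regular cloud operator, the rebuilt swarm spreads widely around the incumbent solution instead of collapsing onto it.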

3.2.3. Adaptive Parameter Strategy

Parameter settings always significantly affect the performance of an intelligent algorithm. The main parameters of the IPSO algorithm are the number of iterations, the population size, the inertia weight, the acceleration factors, the entropy (En), and the hyperentropy (He) in the Gaussian cloud operator. The number of iterations and the population size are relatively fixed and strongly affect the computing time. Hence, we apply the adaptive parameter strategies only to the inertia weight, the acceleration factors, En, and He.

The inertia weight represents the particle's inheritance of its previous speed: a larger weight means a stronger global searching ability, whereas a smaller weight means a stronger local searching ability. A linearly decreasing inertia weight (LDIW), first proposed by Shi and Eberhart [41], is adopted, with the minimum weight set to 0.4 and the maximum weight set to 0.9.

The acceleration factors represent the particle's weights on the attractiveness of the individual and global best-so-far solutions, respectively, and reflect the balance between global and local searching. A typical setting is to make the two factors equal. Several studies have also proposed other effective settings [42]. We propose a linear adaptive parameter strategy so that the algorithm has a larger individual factor and a smaller global factor at the beginning, which benefits global searching, and a smaller individual factor and a larger global factor in the latter stages, which benefits local searching.

The two important parameters in the Gaussian cloud operator, the entropy En and the hyperentropy He, can also be changed adaptively. The mutation of a random dimension value is more beneficial at the beginning of the iterations than at the end. Hence, we propose a nonlinear adaptive parameter strategy for En and He, where the initial entropy is set to 0.05 based on multiple tests.
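The three schedules of this subsection can be collected in one function. The linear forms for the inertia weight and the acceleration factors and the decaying forms for En and He follow the descriptions above, but the endpoint values (other than the 0.4/0.9 weight bounds and the 0.05 initial entropy) and the exact decay exponent are assumptions for illustration.

```python
def adaptive_params(t, T, w_max=0.9, w_min=0.4,
                    c_start=2.5, c_end=0.5, en0=0.05, he_ratio=0.1):
    """Adaptive parameter schedules for the IPSO (sketch).

    t -- current iteration, T -- total number of iterations.
    c_start/c_end, he_ratio, and the quadratic decay are assumed values.
    """
    frac = t / T
    w = w_max - (w_max - w_min) * frac        # LDIW: 0.9 -> 0.4
    c1 = c_start - (c_start - c_end) * frac   # individual factor: large early
    c2 = c_end + (c_start - c_end) * frac     # global factor: large late
    en = en0 * (1.0 - frac) ** 2              # nonlinear entropy decay
    he = he_ratio * en                        # hyperentropy tied to entropy
    return w, c1, c2, en, he

print(adaptive_params(0, 500))    # start of the run: exploratory settings
print(adaptive_params(500, 500))  # end of the run: exploitative settings
```

The schedules shift the search from exploration (large weight, large individual factor, noisy cloud drops) toward exploitation (small weight, large global factor, nearly deterministic drops).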

3.3. Interior Point Method

To rapidly solve the second-stage programing ((29) and (30)), we adopt the interior point method instead of the simplex method in this paper. The interior point method was first proposed by Karmarkar [25] in 1984. It is a polynomial algorithm for linear programing that requires O(n^3.5 L) arithmetic operations on O(L)-bit numbers in the worst case, where n is the number of variables and L is the number of bits in the input. Compared with the simplex method, it is well suited to large-scale problems because its computing time grows only polynomially with the problem size. Moreover, it is not strict about the initial point and offers quadratic convergence and robustness. The improvement in algorithm efficiency is verified in Section 4.3.

4. Numerical Example

4.1. Data and Implementation

In this section, a 20-customer network of allocation and transportation in emergency logistics is designed, and the decision objective is to determine the number, capacities, and locations of supply centers for material support. The customer locations and their stochastic demands are listed in Table 2.

Table 2: Customer locations and stochastic demand distributions.

We adopt the stochastic simulation method [7] to solve the two-stage SCLAP, and the number of simulations is set to 100 for a fair comparison with the traditional EVM. The generalized cost is significantly affected by the fixed cost and the variable cost coefficient. Hence, we test eight pairs of values for comparison and analysis. The eight pairs are divided into two classes using the controlled variable method for testing. The values are listed in Table 3.

Table 3: Fixed cost and variable cost coefficient values.

The experiments are conducted on a laptop with Intel® Core™ i7-6700HQ 2.60 GHz quad-core processors, and the algorithms are coded in MATLAB R2016a environment. For convenience, we adopt the built-in function linprog in MATLAB to achieve linear programing with the simplex method or the interior point method for the second-stage optimization problem.

4.2. Computational Results
4.2.1. Results of the Deterministic Model

We can obtain an optimal solution for the deterministic model derived from the initial EVM using the IPSO. The population size and the number of iterations of the IPSO algorithm are set to 10 and 500, respectively. The capacities of the supply centers are limited to a fixed range, the fixed cost of the generalized cost function is set to 500, and the variable cost coefficient is fixed. The results are listed in Table 4, where the number of supply centers changes from two to four. Cap. denotes the capacity and Pos. the position of each supply center; GC denotes the generalized cost of each solution with a different number of centers, and CT denotes the computing time of each situation.

Table 4: Results of the deterministic model.

The best decision is to build three supply centers, and the optimum generalized cost is 3643.147. We also present the convergence curves for the different numbers of centers in Figure 3 and illustrate the solutions with LA maps in Figures 4–6. In addition, the amount of traffic is represented with different shades of color.

Figure 3: Convergence curves with different numbers of supply centers.
Figure 4: LA map with p = 2.
Figure 5: LA map with p = 3.
Figure 6: LA map with p = 4.

In the traditional one-stage problem, the number and capacity of a supply center are fixed, whereas the number (p) and capacity (Cap.) are decision variables in this problem. Moreover, the capacity has a range. The results of the two-stage SCLAP can provide significant practical conclusions for actual emergency logistics because the scarcity of supply materials is extremely serious and urgent such that the number and capacity of supply centers must be determined accurately based on practical situations. Therefore, we must achieve a balance between minimizing the number of centers and minimizing the distance of transport.

4.2.2. Results of the EVM with Stochastic Simulation

Stochastic simulation is time-consuming and extremely sensitive to the number of simulation iterations: only when this number is sufficiently large does the solution become stable. It is therefore set to 100, and a parallel strategy, a parfor loop in MATLAB, is adopted. For each simulation, the random demands of all customers are generated, and the programing model is solved. The IPSO obtains a stable optimal solution based on the expected value of the generalized cost at each iteration. The results of the EVM with stochastic simulation are listed in Table 5.

Table 5: Results of the EVM with stochastic simulation.

Table 5 shows that the best decision is also to build three supply centers, and the optimum generalized cost is 3660.836. The GC values are slightly larger than those in Table 4 because the simulation inevitably introduces random factors; thus, obtaining the theoretical optimal solution is impossible. Cap. and Pos. also differ from those in Table 4 for the same reason. However, the CT values are significantly larger than those of the deterministic model, which implies that the deterministic model dramatically improves the computational efficiency.
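The source of both the extra cost and the extra time is the simulation loop itself: each of the 100 draws samples every customer demand and re-solves the allocation model, whereas the crisp model uses the demand means once. The following sketch contrasts the simulated estimate of the expected total demand with the analytic sum of means; the demand parameters are illustrative, not the data of Table 2.

```python
import random
from statistics import mean

def simulated_expected_total(mus, sigmas, n_sim=100, rng=random):
    """Stochastic-simulation estimate of the expected total demand E(TD),
    mimicking the 100-iteration simulation loop; demands are truncated at
    zero as nonnegative normal draws.  Data are illustrative."""
    totals = []
    for _ in range(n_sim):
        totals.append(sum(max(0.0, rng.gauss(m, s))
                          for m, s in zip(mus, sigmas)))
    return mean(totals)

random.seed(0)
mus = [50.0, 60.0, 70.0, 40.0]       # assumed demand means
sigmas = [5.0, 6.0, 7.0, 4.0]        # assumed demand standard deviations
est = simulated_expected_total(mus, sigmas)
print(round(est, 1), sum(mus))       # estimate vs. exact sum of means
```

The estimate fluctuates around the exact value of 220, which is why the simulated GC values in Table 5 sit slightly above the deterministic ones while costing a hundred solver calls instead of one.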

4.2.3. Parameter Analysis of Generalized Cost

The generalized cost of the optimal solution is dramatically affected by the fixed cost and the variable cost coefficient. We design two experiments based on the data in Table 3 to demonstrate this effect, with the number of supply centers p set from 2 to 8 in each class of tests. The experiments are run 10 times by the IPSO algorithm with the same parameters. The best results are recorded in Table 6 and shown in Figures 7 and 8.

Table 6: Best results of the 10 tests with different fixed cost values or variable cost coefficients.
Figure 7: Changes in GC with different fixed cost values.
Figure 8: Changes in GC with different variable cost coefficients (fixed cost = 400).

In Table 6, the bold values represent the minimum GCs under different fixed cost values. The most suitable number of supply centers should be chosen according to the given fixed cost value and variable cost coefficient. The fixed cost plays an important role in determining the best number of centers: as shown in Figure 7, the best number increases when the fixed cost decreases. In addition, the GC value increases nonlinearly with the fixed cost. Therefore, if the number of supply centers exceeds the number actually required by the total demand, a large cost is wasted. By contrast, the GC value increases linearly with the variable cost coefficient, which has no effect on the best decision, as shown in Table 6 and Figure 8. Consequently, the most logical and economical approach is to reduce the fixed cost of building each supply center.
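The trade-off behind this observation can be illustrated with a toy cost model. The numbers below are hypothetical; in the paper the transport cost comes from the second-stage allocation model, not from the simple `transport_at_one / p` decay assumed here.

```python
# Hypothetical trade-off: fixed cost rises linearly with the number of
# centers p, while transport cost falls as centers move closer to customers.
def generalized_cost(p, fixed_cost, transport_at_one=3000.0):
    # assumed decay: transport cost inversely proportional to p
    return p * fixed_cost + transport_at_one / p

def best_p(fixed_cost, p_range=range(2, 9)):
    """Number of centers minimizing the toy generalized cost."""
    return min(p_range, key=lambda p: generalized_cost(p, fixed_cost))

# lower fixed cost per center -> more centers become worthwhile
cheap, expensive = best_p(200), best_p(800)
```

Under this toy model the optimal p grows as the fixed cost shrinks, mirroring the behavior of Figure 7.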

4.3. Comparison and Analysis of the Algorithms

To provide a fair comparison of the IPSO and PSO algorithms, we use the same data in Table 2 and the common parameter settings in Section 4.2.1 to solve the two-stage EVM; in the basic PSO algorithm, however, the parameters are kept at fixed values. The test is repeated 10 times, and the statistical results are presented in Table 7. Avg GC denotes the average GC value of the 10 tests, Best GC denotes the best value among them, and Avg Time denotes the average running time. Table 7 shows that the IPSO algorithm outperforms the basic PSO algorithm in both convergence accuracy and efficiency, which means that the improvement strategies are effective.

Table 7: Statistical results of the different algorithms used to solve the two-stage EVM.

The interior point method is adopted in the IPSO algorithm, but its advantage over the simplex method must also be verified. We therefore test the IPSO algorithm with the simplex method in solving the second-stage programming of the deterministic model. The results are listed in row 3 of Table 7 with an asterisk. The Avg GC and Best GC results obtained with the simplex method are close to those of the interior point method. However, Avg Time shows that the interior point method is much more efficient than the simplex method.
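This kind of comparison can be reproduced in miniature with SciPy's `linprog`, which exposes both an interior-point solver (`highs-ipm`) and a dual-simplex solver (`highs-ds`). The toy transportation data below are illustrative, not the paper's test instance.

```python
import numpy as np
from scipy.optimize import linprog

# toy second-stage transportation LP: 2 supply centers, 3 customers
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])      # unit transport costs (illustrative)
cap = np.array([30.0, 25.0])            # center capacities
demand = np.array([15.0, 20.0, 10.0])   # customer demands

c = cost.ravel()                        # decision variables x_ij, row-major
A_ub = np.kron(np.eye(2), np.ones((1, 3)))   # sum_j x_ij <= cap_i
A_eq = np.kron(np.ones((1, 2)), np.eye(3))   # sum_i x_ij  = demand_j

res_ipm = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=demand,
                  bounds=(0, None), method="highs-ipm")   # interior point
res_ds = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=demand,
                 bounds=(0, None), method="highs-ds")     # dual simplex
```

Both methods reach the same optimal objective on a well-posed LP; the paper's finding concerns their running time on the repeated second-stage solves, which this small instance cannot show.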

To further illustrate the applicability and superiority of the proposed IPSO algorithm, additional tests based on the classic one-stage EVM are performed. Solving the classic one-stage EVM is equivalent to solving the second stage of the two-stage EVM with the number of centers and their capacities known. Accordingly, the initial individuals are recoded, the given number of supply centers is set to 4, and the capacities become known constants. Zhou and Liu [7] developed a hybrid intelligent algorithm (HIA), consisting of a network simplex algorithm, a simulation, and a genetic algorithm, to solve this model. HIA and a further PSO variant are used for comparison against the proposed IPSO algorithm and the classic PSO algorithm. The PSO-based algorithms share the same parameter settings as in the previous test. The HIA parameters are set as in [7]: the crossover probability is 0.3, the mutation probability is 0.2, the population size is 10, the iteration number is 500, and the simulation number is 500. The numbers of customers and locations in the test data are fully consistent with the data in Table 2, but the demand distributions differ: the expected demand of the first 10 customers is 5, that of the last 10 customers is 10, and the demand variance of every customer is 1. The test is repeated 10 times, and the results are recorded in Table 8. Statistical results are then calculated and presented in Table 9, where Best GC denotes the minimum general cost as defined in (20), Avg GCE denotes the average percent error of the general cost, Best CT denotes the minimum computing time, and Avg CTE denotes the average percent error of computing time. The percent error is calculated as (actual value − optimal value)/optimal value × 100%, which shows the deviation from the optimal solution.
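The percent-error formula above can be stated as a one-line helper; the sample GC values in the usage line are hypothetical.

```python
def percent_error(actual, optimal):
    """Deviation from the optimal value, as defined in the paper:
    (actual - optimal) / optimal * 100%."""
    return (actual - optimal) / optimal * 100.0

# hypothetical example: a run with generalized cost 2239 against a
# best-known value of 2132 deviates by about 5%
err = percent_error(2239.0, 2132.0)
```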

Table 8: Results of the 10 tests of different algorithms solving the classic one-stage EVM.
Table 9: Statistical results of different algorithms solving the classic one-stage EVM.

The results in Tables 8 and 9 show that the IPSO algorithm performs best in terms of both convergence accuracy and convergence speed. In addition, judging by Avg GCE and Avg CTE, IPSO is the most stable of the four algorithms in terms of accuracy, and it also performs stably in terms of computing time. Because HIA adopts a simulation method, it spends excessive time in the evaluation cycle; hence, its computing time is much longer than those of the other three algorithms.

In addition, the two tests also illustrate the difference between the two EVMs. The proposed two-stage EVM treats the number, capacities, and locations of supply centers as decision variables. This approach suits emergency logistics because it avoids wasting emergency resources on an inappropriate number of supply centers or on inappropriate capacities. In the classic one-stage EVM, by contrast, the number of centers and their capacities are fixed. If the fixed cost is set at 500 and the variable cost coefficient at 0.1, the same general cost can also be obtained with the two-stage EVM. The first-stage objective in the two-stage EVM is 4147 when the optimal solution has a one-stage EVM general cost of 2132, as shown in Table 9. However, if the two-stage EVM is used from the start, the optimal first-stage objective is 4076: the optimal decision is to set up three supply centers with capacities of 30, 65, and 55, and the second-stage objective is 2561. Evidently, the total cost obtained by the two-stage EVM is lower, and the model is more reasonable, than the classic one-stage EVM.

5. Conclusion

In this paper, a two-stage SCLAP in the context of emergency logistics is considered. In this problem, the number and capacities of the supply centers are uncertain and must be determined, a condition that is more practical in emergency logistics than the assumption of general logistics that the number and capacities of supply centers are explicit and fixed. To solve this problem, a two-stage EVM and a generalized cost function are proposed. Because traditional stochastic programming with a simulation method is time-consuming and unstable, we convert the initial model into a deterministic model based on the stochastic theory and the uncertainty theory proposed by Liu [5, 6]. To solve this model, we develop an IPSO algorithm and test it on the well-known 20-customer data set. The IPSO algorithm introduces three improvement strategies, namely, the Gaussian cloud operator, the Restart strategy, and the adaptive parameter strategy. In addition, we adopt the interior point method instead of the simplex method [7] to solve the second-stage programming. The comparison tests prove that these methods dramatically improve the precision and convergence rates. The tests also reveal the difference between the classic one-stage EVM and the proposed two-stage EVM and verify that the latter is more reasonable for emergency logistics. Finally, an analysis of the parameters in the proposed generalized cost function shows that the fixed cost value has an important influence on the decision of the best number of supply centers.
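For illustration, a one-dimensional Gaussian cloud ("normal-in-normal") generator in the spirit of the cloud model of Li et al. [39] can be sketched as follows; the paper's exact Gaussian cloud operator may differ in detail.

```python
import random

random.seed(7)  # fixed seed for reproducibility

def gaussian_cloud_sample(Ex, En, He, n=1):
    """Generate n cloud drops: each drop is drawn from N(Ex, En'^2),
    where En' is itself drawn from N(En, He^2).  He controls how much
    the effective spread varies between drops.  This is an illustrative
    sketch of the cloud-model sampling idea, not the paper's operator.
    """
    drops = []
    for _ in range(n):
        En_prime = abs(random.gauss(En, He))   # per-drop spread
        drops.append(random.gauss(Ex, En_prime))
    return drops

samples = gaussian_cloud_sample(Ex=0.0, En=1.0, He=0.1, n=1000)
```

In a PSO setting, such samples can perturb particles around a promising position (Ex), with En and He tuning the exploration radius and its randomness.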

However, this study has the following limitations. (1) The conversion to the deterministic model is only possible when customer demand follows a normal distribution. (2) Only the EVM is converted and studied in this paper. Depending on the optimization problem, stochastic programming can also be constructed as other models, such as the CCP model and the DCP model; however, only the CCP model can be converted into a deterministic model. (3) In actual emergency logistics, the transportation cost between two nodes is probably uncertain and could also be treated as a stochastic variable. In this paper, it is taken as deterministic, which leaves a significant research direction for the future.

In addition, several interesting subjects should be considered in future research. For example, a new SCLAP that builds new centers while closing old ones deserves study; the objective would be to choose the best locations for the new centers and, simultaneously, the most appropriate old centers to close. Other models with multiple objectives, complex constraints, or fuzzy variables can also be considered. The uncertain CLAP can also be solved with robust optimization, which is becoming a popular research topic.

Descriptions of Notations and Variables

The supply center index set; its cardinality is fixed in the initial SCLAP model and is a decision variable in the two-stage SCLAP model
The customer index set in the initial SCLAP model
The location of a supply center, a point in 2D real vector space
The location of a customer, a point in 2D real vector space
The unit transportation cost of service from a supply center to a customer
The transportation volume from a supply center to a customer
The capacity of a supply center
The solution space of the transportation volumes under a given demand scenario
The stochastic demand variable of a customer under a given scenario
The number of supply centers in the two-stage SCLAP model
The capacity set of all supply centers in the two-stage SCLAP model

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Preeminent Youth Fund of Jiangsu Province, China (no. BK20150724).

References

  1. L. Cooper, “Location-allocation problems,” Operations Research, vol. 11, pp. 331–343, 1963.
  2. R. Logendran and M. P. Terrell, “Uncapacitated plant location-allocation problems with price sensitive stochastic demands,” Computers & Operations Research, vol. 15, no. 2, pp. 189–198, 1991.
  3. E. Carrizosa, E. Conde, M. Muñoz-Marquez, and J. Puerto, “The generalized Weber problem with expected distances,” RAIRO—Operations Research, vol. 29, pp. 35–57, 1995.
  4. E. Carrizosa and J. Puerto, “A note on the optimal positioning of service units,” Operations Research, vol. 46, no. 1, pp. 155–156, 1998.
  5. B. Liu, Theory and Practice of Uncertain Programming, Physica-Verlag, 2002.
  6. B. Liu, “Uncertainty theory,” Studies in Computational Intelligence, vol. 154, no. 3, pp. 1–79, 2015.
  7. J. Zhou and B. Liu, “New stochastic models for capacitated location-allocation problem,” Computers & Industrial Engineering, vol. 45, no. 1, pp. 115–125, 2003.
  8. J. Zhou and B. Liu, “Modeling capacitated location-allocation problem with fuzzy demands,” Computers & Industrial Engineering, vol. 53, no. 3, pp. 454–468, 2007.
  9. F. J. F. Silva and D. S. De La Figuera, “A capacitated facility location problem with constrained backlogging probabilities,” International Journal of Production Research, vol. 45, no. 21, pp. 5117–5134, 2007.
  10. B. H. Wang and S. W. He, “Robust optimization model and algorithm for logistics center location and allocation under uncertain environment,” Journal of Transportation Systems Engineering & Information Technology, vol. 9, no. 2, pp. 69–74, 2009.
  11. Z. Yao, L. H. Lee, W. Jaruphongsa, V. Tan, and C. F. Hui, “Multi-source facility location-allocation and inventory problem,” European Journal of Operational Research, vol. 207, no. 2, pp. 750–762, 2010.
  12. M. Wen and R. Kang, “Some optimal models for facility location-allocation problem with random fuzzy demands,” Applied Soft Computing, vol. 11, no. 1, pp. 1202–1207, 2011.
  13. S. M. Mousavi and S. T. Niaki, “Capacitated location allocation problem with stochastic location and fuzzy demand: a hybrid algorithm,” Applied Mathematical Modelling, vol. 37, no. 7, pp. 5109–5119, 2013.
  14. N. Vidyarthi and S. Jayaswal, “Efficient solution of a class of location-allocation problems with stochastic demand and congestion,” Computers & Operations Research, vol. 48, no. 9, pp. 20–30, 2013.
  15. M. A. Pereira, L. C. Coelho, L. A. N. Lorena, and L. C. D. Souza, “A hybrid method for the probabilistic maximal covering location-allocation problem,” Computers & Operations Research, vol. 57, pp. 51–59, 2015.
  16. M. Alizadeh, I. Mahdavi, S. Shiripour, and H. Asadi, “A nonlinear model for a capacitated location-allocation problem with Bernoulli demand using sub-sources,” International Journal of Engineering, vol. 26, no. 9, pp. 1025–2495, 2012.
  17. M. Alizadeh, I. Mahdavi, N. Mahdavi-Amiri, and S. Shiripour, “A capacitated location-allocation problem with stochastic demands using sub-sources: an empirical study,” Applied Soft Computing, vol. 34, pp. 551–571, 2015.
  18. A. Klose, “An LP-based heuristic for two-stage capacitated facility location problems,” Journal of the Operational Research Society, vol. 50, no. 2, pp. 157–166, 1999.
  19. A. Marín and B. Pelegrín, “Applying Lagrangian relaxation to the resolution of two-stage location problems,” Annals of Operations Research, vol. 86, no. 1–6, pp. 179–198, 1999.
  20. Y. K. Liu and X. Zhu, “Capacitated fuzzy two-stage location-allocation problem,” International Journal of Innovative Computing, Information & Control, vol. 3, no. 4, pp. 987–999, 2007.
  21. G. A. Ghodsi, A Lagrangian Relaxation Approach to a Two-Stage Stochastic Facility Location Problem with Second-Stage Activation Cost, University of Waterloo, Waterloo, Ontario, Canada, 2012.
  22. A. Adibi and J. Razmi, “2-stage stochastic programming approach for hub location problem under uncertainty: a case study of air network of Iran,” Journal of Air Transport Management, vol. 47, pp. 172–178, 2015.
  23. J.-P. Arnaout, “Ant colony optimization algorithm for the Euclidean location-allocation problem with unknown number of facilities,” Journal of Intelligent Manufacturing, vol. 24, no. 1, pp. 45–54, 2013.
  24. N. Megiddo and K. J. Supowit, “On the complexity of some common geometric location problems,” SIAM Journal on Computing, vol. 13, no. 1, pp. 182–196, 1984.
  25. N. Karmarkar, “A new polynomial-time algorithm for linear programming,” Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.
  26. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, 1995.
  27. J. Ciurana, G. Arias, and T. Ozel, “Neural network modeling and particle swarm optimization (PSO) of process parameters in pulsed laser micromachining of hardened AISI H13 steel,” Materials & Manufacturing Processes, vol. 24, no. 3, pp. 358–368, 2009.
  28. B. Liu, L. Wang, and Y. Jin, “An effective PSO-based memetic algorithm for flow shop scheduling,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 1, pp. 18–27, 2007.
  29. X. Li and J. Wang, “A steganographic method based upon JPEG and particle swarm optimization algorithm,” Information Sciences, vol. 177, no. 15, pp. 3099–3109, 2007.
  30. M. G. Omran, A. Salman, and A. P. Engelbrecht, “Dynamic clustering using particle swarm optimization with application in image segmentation,” Pattern Analysis and Applications, vol. 8, no. 4, pp. 332–344, 2006.
  31. N. Holden and A. A. Freitas, “A hybrid PSO/ACO algorithm for discovering classification rules in data mining,” Journal of Artificial Evolution & Applications, vol. 2008, Article ID 316145, 11 pages, 2008.
  32. R. C. Eberhart and Y. Shi, “Particle swarm optimization: developments, applications and resources,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '01), vol. 1, pp. 81–86, 2001.
  33. X. Li and A. P. Engelbrecht, “Particle swarm optimization: an introduction and its recent developments,” in Proceedings of the Annual Conference Companion on Genetic and Evolutionary Computation, pp. 33–57, 2007.
  34. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
  35. J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, 2005.
  36. T. Yamaguchi and K. Yasuda, “Adaptive particle swarm optimization: self-coordinating mechanism with updating information,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '06), pp. 2303–2308, October 2006.
  37. W. Liao and J. Wang, “Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization,” in Proceedings of the Second International Conference on Advances in Swarm Intelligence (ICSI '11), pp. 859–871, Chongqing, China, June 2011.
  38. D. Zhan, H. Lu, W. Hao, and D. Jin, “Improving particle swarm optimization: using neighbor heuristic and Gaussian cloud learning,” Intelligent Data Analysis, vol. 20, no. 1, pp. 167–182, 2016.
  39. D. Li, H. Meng, and X. Shi, “Membership cloud and membership cloud generator,” Computer Research & Development, vol. 32, no. 6, 1995.
  40. G. Zhang, R. He, Y. Liu, D. Li, and G. Chen, “An evolutionary algorithm based on cloud model,” Chinese Journal of Computers, vol. 31, no. 7, 2008.
  41. Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the 1998 IEEE World Congress on Computational Intelligence, pp. 69–73, 1998.
  42. A. Carlisle and G. Dozier, “An off-the-shelf PSO,” in Proceedings of the Workshop on Particle Swarm Optimization, 2001.