Abstract

We solve a novel inventory-location model with a stochastic capacity constraint based on a periodic inventory control (ILM-PR) policy. The ILM-PR policy implies several changes with respect to previous models proposed in the literature, which consider continuous review as their inventory policy. One of these changes is the inclusion of the undershoot concept, which has not been considered in previous ILMs in the literature. Based on our model, we are able to design a distribution network for a two-level supply chain, addressing both warehouse location and customer assignment decisions, whilst taking into consideration several aspects of inventory planning, in particular, evaluating the impact of the inventory control review period on the network configuration and system costs. Because the model is a very hard-to-solve combinatorial nonlinear optimisation problem, we implemented two heuristics to solve it, namely, Tabu Search and Particle Swarm Optimisation. These approaches were tested over small instances, in which they were able to find the optimal solution in just a few seconds. Because the model is a new one, a set of medium-size instances is provided that can be useful as a benchmark in future research. The heuristics showed a good convergence rate when applied to those instances. The results confirm that decision making over the inventory control policy has effects on the distribution network design.

1. Introduction

Distribution network design (DND) is one of the most important problems for companies that distribute products to their customers. The problem consists of selecting specific sites at which to install plants, warehouses, and distribution centres; assigning customers to the facilities that serve them; and interconnecting facilities through flow assignment decisions. DND is typically solved as part of a sequential approach that simplifies the associated tactical and operational issues. Hence, the decisions that have been omitted are tackled only after the DND problem has already been solved. This means that decisions related to the location of warehouses and the allocation of customers are made without taking into account operational aspects such as transportation and inventory. This situation leads to suboptimal designs because operational decisions are restricted by the fixed network design.

This paper considers a two-level supply chain, in which a single plant serves a set of warehouses, which in turn serve a set of end customers or retailers. Unlike traditional approaches and in accordance with the recent inventory-location literature, in this paper, we incorporate the inventory control policy as a relevant factor that directly affects DND. A distinctive feature of our model is that it is based on a periodic inventory control policy for each distribution centre (DC) in a single-product scenario, which is important for those industries in which a continuous review policy is not possible. Because our model is a very hard-to-solve nonlinear one, we have solved it using well-known heuristic approaches, namely, Tabu Search (TS) and Particle Swarm Optimisation (PSO). Moreover, as the model is new, there is no existing set of instances that can be used as a benchmark. Thus, in this paper, we present two types of medium-sized instances. The first type considers uniformly distributed customers. The other one considers two clusters of customers. Warehouses are uniformly distributed in both types of instances.

The remainder of the paper is organised as follows. First, a literature review of the main inventory-location models (ILMs) is presented in Section 2. Then, in Section 3, we present and analyse the stochastic capacity constraint under periodic review. At the end of that section, the formulation of the model is presented. Section 4 briefly reviews both the TS and PSO algorithms and some features of the corresponding implementations. Section 5 starts by explaining the procedure used to generate the set of instances presented in this paper. In Section 5.1, the obtained results are presented. Finally, in the last section, some conclusions based on the numerical results are outlined.

2. Literature Review

Many authors have focused on the DND problem, and on ILMs in particular, during the last 15 years. For instance, Simchi-Levi and Zhao [1], Mourits and Evers [2], Bradley and Arntzen [3], Miranda and Garrido [4], and Miranda [5] all analysed the levels of decision making related to DND and supply chain management (SCM). Daskin [6], Simchi-Levi et al. [7], and Drezner and Hamacher [8] present detailed reviews and analyses of the facility location problem (FLP). However, the traditional structure of the FLP is not useful for considering interactions between facility location and inventory decisions or the impact of the latter on network configurations. For example, the risk pooling effect states that the total system safety stock is reduced when customers are served by a smaller number of warehouses. Daskin et al. [9] and Shen et al. [10] incorporate a continuous review $(Q, R)$ inventory control policy into the widely studied uncapacitated facility location problem (UFLP), establishing a safety stock at each site. Daskin et al. [9] employ Lagrangian relaxation to solve the model, whereas Shen et al. [10] reformulate the model as a set-covering problem and solve it using a column generation method. Based on the same inventory control policy, Miranda and Garrido [4] consider the order quantity for each warehouse as a decision variable and the capacitated facility location problem (CFLP) as a base framework. Finally, in Miranda and Garrido [11] and Ozsen et al. [12], the authors handle capacity constraints by extending previous inventory-location models.

More recently, Kumar and Tiwari [13] have presented an ILM that incorporates the risk pooling effect for both safety stock and running inventory. Additionally, in their model, the authors consider the effect produced when warehouses and end customers work jointly. Tancrez et al. [14] have presented a three-level supply chain nonlinear ILM that integrates location, allocation, and shipment-size decisions. In [15], the authors developed a model for the DND problem that considers the short lifetime of perishable products; to solve their model, the authors implemented a Lagrangian relaxation. One paper that has addressed the DND problem using periodic inventory review is presented in [16]. In their paper, the authors consider an $(R, S)$ inventory policy, which is slightly different from the one that is considered in this work.

Different techniques have been proposed to solve these models. For instance, Bard and Nananukul [17] proposed a branch-and-price algorithm for an integrated production and inventory routing problem. In [18], Lagrangian relaxation was used to solve a mixed-integer linear programming model for multi-echelon, multi-commodity supply chain network design. Heuristics have also been used to solve DND problems. For instance, in Armentano et al. [19], TS is used to minimise the production and inventory costs in a model that integrates production and distribution decisions while considering the capacity constraints of the plant. Askin et al. [20] implemented an evolutionary algorithm to solve an ILM considering multicommodity and distribution planning decisions; in their paper, the authors present a very comprehensive description of their genetic algorithm. To the best of our knowledge, PSO has not previously been considered for solving ILM problems.

As we stated previously, in this paper we consider a two-level supply chain with a single plant, a set of warehouses, and a set of end customers or retailers. Because we incorporate the inventory control policy, based on a periodic review at each DC, as a relevant factor that directly affects DND, this model can be considered a variant of the models presented previously in the literature [4, 9, 12, 21], which consider a continuous inventory review policy. In the next section, the new model proposed in this work is presented.

3. Inventory-Location Model with Stochastic Capacity Constraint

The model presented in [11] optimises warehouse location and customer assignment decisions, taking into account fixed installation, transportation, inventory, and fixed ordering costs. The authors assume that each warehouse $i$ operates a continuous inventory review $(Q_i, RP_i)$ policy to meet a stochastic demand with mean $D_i$ (units of product per time unit) and variance $V_i$. It is also assumed that the plant takes a lead time, $LT_i$, to fill incoming orders from warehouse $i$. A stochastic constraint on inventory capacity is proposed, assuming a maximum inventory level $ICap_i$ for each warehouse $i$. This constraint is based on chance constrained programming [22], and it fixes a maximum probability, $\beta$, of violating the inventory capacity at peak times, which occur only when orders arrive at the warehouses. This peak inventory level corresponds to the reorder point $RP_i$, which is set in order to satisfy demand during $LT_i$ with a probability of at least $\alpha$, minus the stochastic demand during $LT_i$, plus the order quantity $Q_i$. Thus, the inventory capacity constraint can be written as a deterministic nonlinear constraint as follows:

$$Q_i + \left(Z_\alpha + Z_{1-\beta}\right)\sqrt{LT_i\, V_i} \le ICap_i \quad \forall i. \qquad (1)$$

It may be noted that a similar but more conservative capacity constraint is proposed in [12], which ensures that the inventory capacity is observed in 100% of the cases. However, when periodic review is considered, particularly assuming an $(R, s, S)$ inventory control policy [23], the capacity constraint cannot be stated for an arbitrary moment and does not take the same form as in (1). In an $(R, s, S)$ inventory control policy, inventory levels are reviewed only every $R$ time units, and if the inventory level is lower than $s$, then an order is submitted to reach the objective level $S$. Consequently, the order size must consider the well-known undershoot magnitude (US), which is the amount of items required in addition to $S - s$ to reach $S$ units of inventory. The average US, as a function of the demand mean and variance, $D_i$ and $V_i$, and of the review period $R_i$, can be computed as

$$E[US_i] = \frac{R_i V_i + (R_i D_i)^2}{2 R_i D_i} = \frac{V_i}{2 D_i} + \frac{R_i D_i}{2}. \qquad (2)$$
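As a concrete illustration, the following minimal Java sketch evaluates the two-moment approximation of equation (2); the method and variable names are ours, not those of the original implementation:

/* Expected undershoot under an (R, s, S) periodic review policy, eq. (2).
   d = mean daily demand D_i, v = daily demand variance V_i,
   r = review period R_i in days. */
static double expectedUndershoot(double d, double v, double r) {
    // E[US_i] = V_i / (2 D_i) + R_i D_i / 2
    return v / (2.0 * d) + (r * d) / 2.0;
}

For example, a warehouse facing a mean demand of 100 units/day with variance 400 and a review period of 2 days would carry an expected undershoot of 102 units.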

In terms of inventory capacity constraints, peak inventory levels cannot be controlled at every instant, but only at specific moments in each review period. The peak inventory level is reached when an order arrives at the warehouse, $LT_i$ time units after the last review, and naturally only if an order was submitted to the central warehouse or plant. Consequently, when an order arrives at warehouse $i$, the inventory level is

$$I_i^{peak} = S_i - D(LT_i), \qquad (3)$$

where $D(LT_i)$ denotes the stochastic demand during the lead time.

This expression is not surprising: when an order is submitted to the plant, the total inventory position (on-hand plus on-order inventory) reaches the level $S_i$, and $LT_i$ time units later the inventory level has been reduced by the demand during the lead time, $D(LT_i)$. Similar to [11], in this paper we propose that this constraint must be observed at each peak inventory instant (i.e., for each order period) with a fixed and known probability of $1-\beta$, but now assuming periodic review:

$$P\left(S_i - D(LT_i) \le ICap_i\right) \ge 1 - \beta \quad \forall i. \qquad (4)$$

We then define the minimum order size, $Q_i$, as

$$Q_i = S_i - s_i. \qquad (5)$$

Consequently, constraint (4) can be rewritten as the following deterministic constraint:

$$Q_i + s_i - LT_i D_i + Z_{1-\beta}\sqrt{LT_i\, V_i} \le ICap_i \quad \forall i. \qquad (6)$$

Finally, the reorder point $s_i$ is set to ensure that, each time an order is not submitted (inventory level larger than $s_i$), the inventory level is large enough to fill the demand until the next order arrives, $R_i + LT_i$ time units later, with a probability or service level of $\alpha$:

$$s_i = (R_i + LT_i)\, D_i + Z_\alpha\sqrt{(R_i + LT_i)\, V_i} \quad \forall i. \qquad (7)$$

Finally, substituting (7) into (6), the inventory capacity constraint becomes

$$Q_i + R_i D_i + Z_\alpha\sqrt{(R_i + LT_i)\, V_i} + Z_{1-\beta}\sqrt{LT_i\, V_i} \le ICap_i \quad \forall i. \qquad (8)$$
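For illustration, the following Java sketch (all identifiers are ours) computes the reorder point of equation (7) and checks the deterministic capacity constraint (8) for a single warehouse:

/* Capacity feasibility of one warehouse under periodic review, eq. (8).
   d = D_i, v = V_i, r = R_i, lt = LT_i, q = Q_i, iCap = ICap_i;
   zAlpha and zBeta are the standard normal values accumulating the
   probabilities alpha and 1 - beta, respectively. */
static boolean capacityFeasible(double d, double v, double r, double lt,
                                double q, double iCap,
                                double zAlpha, double zBeta) {
    double s = (r + lt) * d + zAlpha * Math.sqrt((r + lt) * v);    // eq. (7)
    double peakLevel = q + s - lt * d + zBeta * Math.sqrt(lt * v); // LHS of (6)
    return peakLevel <= iCap;
}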

According to the previous inventory control assumptions, we describe the proposed Inventory-Location Model with Stochastic Constraints on Inventory Capacity under Periodic Review (ILM-SCC-PR) as a stochastic nonlinear mixed-integer programming (SNL-MIP) model. Based on the periodic inventory control policy, the safety stock to be included in the objective function is the average inventory level just before an order arrives at the warehouse:

$$SS_i = s_i - E[US_i] - LT_i D_i = R_i D_i + Z_\alpha\sqrt{(R_i + LT_i)\, V_i} - E[US_i]. \qquad (9)$$

Additionally, inventory and ordering costs related to the order size, or inventory cycle, are evaluated in terms of the minimum order size $Q_i$ as a decision variable, as in the well-known EOQ model:

$$OC_i\,\frac{D_i}{Q_i} + HC_i\,\frac{Q_i}{2}. \qquad (10)$$

The variables and parameters considered in the mathematical formulation are as follows.

Parameters:
$N$: number of available sites to install warehouses.
$M$: number of customers that must be served by the installed warehouses.
$F_i$: fixed location cost of a warehouse on site $i$ ($/day).
$TC_{ij}$: transportation unit cost from the warehouse on site $i$ to customer $j$ ($/unit).
$OC_i$: fixed ordering cost at site $i$ ($/order).
$HC_i$: holding cost per unit of product per time unit at site $i$ ($/unit/day).
$\mu_j$: mean demand of customer $j$ per day.
$\sigma_j$: standard deviation of the demand of customer $j$ per day.
$ICap_i$: inventory capacity at warehouse $i$.
$Qmax_i$: maximum order size at warehouse $i$.
$R_i$: inventory review period at warehouse $i$, in days.
$LT_i$: average lead time at warehouse $i$, in days.
$Z_\alpha$: value of the standard normal distribution, which accumulates a probability of $\alpha$.
$Z_{1-\beta}$: value of the standard normal distribution, which accumulates a probability of $1-\beta$.

Variables:
$X_i$: binary variable; it is equal to 1 if a warehouse is installed on site $i$ and 0 otherwise.
$Y_{ij}$: binary variable; it is equal to 1 if warehouse $i$ serves customer $j$ and 0 otherwise.
$Q_i$: order size at warehouse $i$; it is greater than 0 if $X_i = 1$ and 0 otherwise.
$D_i$: total mean demand at warehouse $i$; it is greater than 0 if there exists at least one $Y_{ij} = 1$ and 0 otherwise.
$V_i$: total variance of the demand at warehouse $i$; it is greater than 0 if there exists at least one $Y_{ij} = 1$ and 0 otherwise.

Then, the optimisation problem is

$$\min \sum_{i=1}^{N}\left(F_i X_i + \sum_{j=1}^{M} TC_{ij}\,\mu_j\, Y_{ij} + OC_i\,\frac{D_i}{Q_i} + HC_i\,\frac{Q_i}{2} + HC_i\, SS_i\right) \qquad (11)$$

s.t.:

$$\sum_{i=1}^{N} Y_{ij} = 1 \quad \forall j, \qquad (12)$$
$$Y_{ij} \le X_i \quad \forall i, j, \qquad (13)$$
$$Q_i + R_i D_i + Z_\alpha\sqrt{(R_i + LT_i)\, V_i} + Z_{1-\beta}\sqrt{LT_i\, V_i} \le ICap_i \quad \forall i, \qquad (14)$$
$$Q_i \le Qmax_i \quad \forall i, \qquad (15)$$
$$D_i = \sum_{j=1}^{M}\mu_j\, Y_{ij} \quad \forall i, \qquad (16)$$
$$V_i = \sum_{j=1}^{M}\sigma_j^2\, Y_{ij} \quad \forall i, \qquad (17)$$
$$E[US_i] = \frac{V_i}{2 D_i} + \frac{R_i D_i}{2} \quad \forall i, \qquad (18)$$
$$X_i, Y_{ij} \in \{0, 1\} \quad \forall i, j, \qquad (19)$$

where $SS_i$ is given by (9).

Equation (11) is the total system cost. The first term is the fixed setup and operating cost of opening warehouses. The second term is the daily transportation cost between warehouses and customers. The third term contains the fixed ordering and inventory costs related to the warehouse order size. The fourth term represents the storage cost associated with the safety stock at each warehouse. Equation (12) ensures that each retailer is served by exactly one warehouse. Inequality (13) ensures that customers are only assigned to installed warehouses ($X_i = 1$). Inequality (14) ensures that the inventory capacity of each warehouse is respected with a probability of at least $1-\beta$. Inequality (15) ensures that the order size is below the maximum order size allowed at warehouse $i$. Equations (16) and (17) compute the mean and variance of the total demand served by each warehouse $i$, respectively. Equation (18) determines the value of $E[US_i]$. Finally, (19) states integrality for the variables $X_i$ and $Y_{ij}$. This model is NP-hard because it is clearly an extension of the UFLP, which is itself NP-hard (the UFLP is recovered, for instance, when inventory costs are zero and capacities are unbounded). In addition, the objective function and one constraint are nonlinear, resulting in a model that is very difficult to solve to optimality using classical mathematical programming techniques. Therefore, in this paper, we attempt to solve medium-size instances by means of two well-known heuristics, Tabu Search (TS) and Particle Swarm Optimisation (PSO).
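To make the evaluation of a candidate design concrete, the following Java sketch computes the objective (11) for a given assignment. The data layout and identifiers are ours, and the input is assumed to be feasible (every assigned warehouse is open):

/* Total daily cost (11) of a candidate solution. Hypothetical data layout:
   assign[j] = warehouse serving customer j, open[i] = X_i, q[i] = Q_i;
   f = F_i, tc = TC_ij, oc = OC_i, hc = HC_i, mu/sigma = customer demand
   statistics, r = R_i, lt = LT_i. */
static double totalCost(int[] assign, boolean[] open, double[] q,
                        double[] f, double[][] tc, double[] oc, double[] hc,
                        double[] mu, double[] sigma, double[] r, double[] lt,
                        double zAlpha) {
    int n = f.length;
    double[] d = new double[n];  // D_i, eq. (16)
    double[] v = new double[n];  // V_i, eq. (17)
    double cost = 0.0;
    for (int j = 0; j < assign.length; j++) {
        int i = assign[j];
        d[i] += mu[j];
        v[i] += sigma[j] * sigma[j];
        cost += tc[i][j] * mu[j];                          // transportation term
    }
    for (int i = 0; i < n; i++) {
        if (!open[i]) continue;
        cost += f[i];                                      // location term
        if (d[i] == 0.0) continue;                         // open but unused
        cost += oc[i] * d[i] / q[i] + hc[i] * q[i] / 2.0;  // EOQ terms, eq. (10)
        double us = v[i] / (2.0 * d[i]) + r[i] * d[i] / 2.0;        // eq. (18)
        double ss = r[i] * d[i]
                  + zAlpha * Math.sqrt((r[i] + lt[i]) * v[i]) - us; // eq. (9)
        cost += hc[i] * ss;                                // safety stock term
    }
    return cost;
}

A TS or PSO iteration can then rank candidate solutions by calling totalCost together with the capacityFeasible check sketched earlier.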

4. Heuristic Methods

Heuristic methods are a common approach to solving hard combinatorial optimisation problems. Despite the fact that heuristics do not guarantee optimality, the solutions they provide can be considered good suboptimal ones. Exact methods, in contrast, guarantee optimality, but they usually fail when dealing with medium- and large-sized problems. In this paper, two heuristics have been separately considered to solve our DND problem. The first one is a well-known local search technique called Tabu Search. The second one is an evolutionary algorithm called Particle Swarm Optimisation. In the next subsections, an overview of these two heuristics is provided.

4.1. Tabu Search

We can describe the TS approach as a local search technique guided by the use of adaptive or flexible memory structures. However, such a general definition fails to capture the specificity of TS, which could then be confused with other techniques such as Greedy Randomized Adaptive Search Procedure (GRASP) algorithms. The variety of tools and search principles introduced and described in [24] is such that TS can be considered the seed of a general framework for modern heuristic search [25]. TS has been applied to several combinatorial optimisation problems (see, for instance, [26–29]), and other techniques [30–32] have also been used to solve different variants of the FLP. TS is essentially a local search algorithm; that is, it needs to "exchange information" with its neighbours. To do that, the neighbourhood must first be defined. Typically, a TS algorithm has only one neighbourhood move that defines the set of possible candidate solutions, and this move is used across the whole execution of the algorithm. Because different neighbourhood definitions can lead to different results, in this paper we have implemented two types of neighbourhoods. The first one is defined by a change in the allocation of a customer from one facility to another without any restriction; that is, the new facility could be either open or closed at the allocation moment. The second one is defined by a change in the allocation of a customer from one facility to another in which the new facility was open previous to the movement 90% of the time; in other words, the probability of moving a customer to a closed facility is equal to 10% (a sketch of both moves is given after the parameter list below). The TS heuristic provides a diversification mechanism that allows it to escape low-quality neighbourhoods and "jump" to explore new ones. The diversification mechanism implemented in this study is a restart method, which reinitialises the current solution without losing the best solution found by the algorithm. As stated above, the restart method is used in case the TS is unable to move out of low-quality neighbourhoods. The TS algorithm requires the following structures to be implemented.
(i) solutionVector: a vector with a size equal to the number of customers; for each customer, it contains the DC to which the customer is allocated.
(ii) bestSolutionVector: a vector corresponding to the best solutionVector found during the TS execution.
(iii) candidateSolutionVector: a vector resulting from the application of the neighbourhood movement to a solutionVector.
(iv) candidateList: a list of candidateSolutionVectors; from this list, we obtain the next solutionVector, which corresponds to the best candidate solution in the list.
(v) tabuList: a matrix that contains those neighbourhood movements that are prohibited in the current TS iteration.
Additionally, our TS implementation requires the following parameters.
(i) iterationNumber: total number of iterations before the algorithm ends.
(ii) diversificationBound: number of iterations after which the diversification criterion (restart method) is applied.
(iii) tabuListSize: size of the tabuList, that is, the number of iterations for which a specific movement remains banned.
(iv) listOfCandidates: size of the candidateList.
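As promised above, the following Java sketch illustrates the two neighbourhood moves; it is a minimal illustration under our own naming and random-bias mechanics, not the paper's actual code:

/* Neighbourhood moves over solutionVector: move N1 reallocates a random
   customer to any facility; move N2 prefers facilities that are already
   open, moving to a closed one only 10% of the time. */
import java.util.List;
import java.util.Random;

class Neighbourhood {
    static final Random RNG = new Random();

    // N1: reallocate a random customer to a random facility (open or closed).
    static int[] moveN1(int[] solution, int numFacilities) {
        int[] candidate = solution.clone();
        candidate[RNG.nextInt(candidate.length)] = RNG.nextInt(numFacilities);
        return candidate;
    }

    // N2: with probability 0.9 the destination is drawn from the open
    // facilities; with probability 0.1 from the closed ones.
    static int[] moveN2(int[] solution, List<Integer> open, List<Integer> closed) {
        int[] candidate = solution.clone();
        List<Integer> pool = (RNG.nextDouble() < 0.9 || closed.isEmpty()) ? open : closed;
        candidate[RNG.nextInt(candidate.length)] = pool.get(RNG.nextInt(pool.size()));
        return candidate;
    }
}

Move N2 biases the search toward consolidating customers in already-open facilities, which tends to keep the fixed-cost term of (11) low.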

The TS algorithm starts with a random solution in which each customer is allocated to a randomly selected DC. When all customers have been assigned, TS validates the model constraints. If the constraints are satisfied, the initial solution is taken as the best solution found so far. A list of candidate solutions, neighbours of the current solution, is then generated; the size of this list corresponds to the listOfCandidates parameter. Once the candidate list has been generated, the TS algorithm selects the best candidate solution that is not on the tabu list. If the best candidate is in the tabuList, it cannot be selected unless its cost is less than the cost of the best solution found so far (aspiration criterion). When a new current solution is selected, the tabuList is updated, and a new iteration starts. The algorithm stops when the number of iterations is equal to the parameter iterationNumber. The general framework of our TS implementation is shown in Algorithm 1.

4.2. Particle Swarm Optimisation

The PSO algorithm was proposed and developed by Kennedy and Eberhart [33–37]. Although the PSO algorithm was initially designed to tackle continuous optimisation problems, during the last decade, it has proven to be a very good alternative for solving combinatorial optimisation problems. In fact, several articles have used this technique to solve complex combinatorial optimisation problems; see, for example, [38, 39]. For a comprehensive analysis of publications concerning PSO approaches, see [40].

Unlike other evolutionary algorithms, PSO does not use the "survival" concept. This is because all particles are kept "alive" throughout the algorithm execution time, and at no time is their survival threatened. The algorithm starts when a set of particles (or swarm) is initialised. Each particle $p$ is represented by a vector in which the value of each element $j$, with $j = 1, \dots, M$, corresponds to the warehouse that has been assigned to customer $j$. In our algorithm, the initialisation is a random process. For each particle, we know its current position and its previous best position ($pBest$). We also calculate for each particle the best current neighbour ($lBest$) and the best particle ($gBest$) found so far. The position of each individual particle is adjusted at each iteration by considering its previous best position, the position of the best neighbour, and the position of the global best solution found so far (see Algorithm 2).

begin
   iteration = 0;
   Generate initialSolution s;
   sBest = s;
   while iteration < iterationNumber do
      candidateList = GenerateNeighbourhood(s);
      s' = findBest(candidateList);
      while s' isTabu do
         if cost of s' < cost of sBest then
            apply aspirationCriterion (accept s');
            break;
         replace s' with the next best candidate in candidateList;
      if cost of s' < cost of sBest then
         sBest = s';
         iterationsWithoutImprovement = 0;
      else
         iterationsWithoutImprovement = iterationsWithoutImprovement + 1;
      s = s';
      update(tabuList);
      check(diversificationCriterion);
      iteration = iteration + 1;
end
Algorithm 1: General framework of the TS implementation.

The general frame of our PSO algorithm is presented in Algorithm 2.

The most important step in Algorithm 2 is the particle updating. As we have said before, the PSO algorithm was designed for continuous optimisation problems. Therefore, we need to adapt some of its steps to solve combinatorial problems. One important change is in the strategy used to move a particle $p$ from its position in iteration $t$ to its next position in iteration $t+1$. To do that, we have considered three possible alternatives: the first one (namely, $p_1$) corresponds to a change in the allocation of one of the customers (randomly selected) to a new (randomly selected) warehouse. The second alternative ($p_2$) is a change in the allocation of one of the customers (randomly selected) to a new warehouse following the allocation of either the best neighbour ($lBest$) or the prior best position ($pBest$). Finally, the third alternative ($p_3$) corresponds to the same movement but now considering the allocation from the global best solution reached so far ($gBest$). Therefore, the position of particle $p$ in iteration $t+1$ will be the best particle among $p_1$, $p_2$, and $p_3$. Below, we present the pseudocode for our particle updating process (see Algorithm 3).

begin
   Initialise the set of particles P;
   Calculate gBest;
   while maxIteration not reached do
      foreach particle p in P do
         Calculate lBest(p);
         Calculate pBest(p);
         Update position of p;
      Calculate gBest;
end
Algorithm 2: General frame of the PSO algorithm.

begin
   Particles p1, p2, p3 = copies of particle p;
   if inertia >= random() then
      p1 = LocalMove(p1, random warehouse);
   if cognitiveFactor >= random() then
      p2 = LocalMove(p2, pBest);
   else
      p2 = LocalMove(p2, lBest);
   if globalCognitiveFactor >= random() then
      p3 = LocalMove(p3, gBest);
   p = best(p1, p2, p3);
end
Algorithm 3: Particle updating process.

As with local search algorithms, the PSO approach can also get trapped in local optima if the global best and local best positions are equal to the position of the particle over a number of iterations [41]. One distinctive feature of our algorithm is the neighbourhood definition. The idea is that every particle is able to see the best particle at the current iteration. We then assume that each particle $a$ can see, at least, any other particle $b$ whose distance to $gBest$ is less than or equal to the distance from $a$ to $gBest$. The distance between two particles is calculated as the number of customers that the two positions allocate to different warehouses:

$$dist(a, b) = \sum_{j=1}^{M} \mathbf{1}\left[x_j^a \neq x_j^b\right].$$
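A minimal Java sketch of this distance and of the resulting visibility test (method names are ours; the Hamming-style distance matches the definition above) is:

/* Distance between two particle positions: number of customers whose
   warehouse allocation differs. A particle a "sees" particle b when b is
   at least as close to gBest as a is. */
static int distance(int[] a, int[] b) {
    int d = 0;
    for (int j = 0; j < a.length; j++) {
        if (a[j] != b[j]) d++;
    }
    return d;
}

static boolean sees(int[] a, int[] b, int[] gBest) {
    return distance(b, gBest) <= distance(a, gBest);
}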

It is clear that, in this paper, we do not attempt to develop an original solution strategy for our problem. Instead, we propose very simple procedures to obtain solutions for our new model that can be used as a baseline in future research. Moreover, validating some of the main model assumptions is also one of the goals of this work.

5. Experiments and Computational Results

In this section, we present the experiments we have carried out, and we draw some conclusions from our numerical results. As we have noted before in this paper, our goal is not to state whether the TS or the PSO approach is the best way to solve this problem. In fact, we firmly believe that other approaches might be more effective than ours, and, therefore, applying different strategies to solve our model appears to be a fruitful area for future research in the artificial intelligence field. In spite of this, given the simplicity that both TS and PSO offer, we present these results as a baseline for future studies. We have selected these two algorithms as representatives of local search and evolutionary strategies, respectively. Therefore, two of the most important approaches used to solve large and complex combinatorial optimisation problems are covered by these two algorithms.

Moreover, in this section, we present a set of instances that can be used as a benchmark in future studies. The benchmark set is built from two base configurations: one in which customers are distributed uniformly and one in which customers are grouped in clusters. In the uniform configuration, both customers and DCs are randomly distributed over the entire area, whereas, in the clustered configuration, only the DCs are distributed over the entire area, and customers are placed randomly within a given radius of a cluster centre. The maximum distance allowed between a customer and the centre of its cluster is a parameter; in our case, the number of clusters was equal to 2 and the radius was 25 [km]. These two base configurations were used to derive all the corresponding instances. Customer demand is drawn from a uniform distribution. Transportation costs (TC) are proportional to the Euclidean distance between customer $j$ and warehouse $i$. Fixed costs (FC) are calculated as the sum of a constant value (in our case 5000) and a factor that depends on the distance between the warehouse and its closest cluster centre; with this expression, facilities that are close to the "city centre" are more expensive. In the same way, the capacity of a warehouse decreases as its distance to the centre of the cluster becomes smaller: $ICap_i$ is calculated as a base value plus a distance-dependent term. The ordering cost (OC) and holding cost (HC) depend on $ICap_i$; the larger the capacity, the smaller the inventory costs. $LT_i$ is a random number drawn between fixed bounds. The maximum order size $Qmax_i$ is determined as a constant plus a fixed percentage of the capacity of warehouse $i$. FC, HC, OC, service levels, $LT_i$, $R_i$, and $ICap_i$ for each DC are the same in the two base configurations. Figure 1 shows these configurations.
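The following Java sketch illustrates the flavour of this generation procedure for the clustered configuration. Every numeric constant except the cluster radius (25 km) and the fixed-cost constant (5000) is a hypothetical placeholder, since the exact values are not reproduced in the text:

/* Instance-generation sketch for the clustered configuration.
   Constants not taken from the text are illustrative placeholders. */
import java.util.Random;

class InstanceGenerator {
    static final Random RNG = new Random();
    static final double CLUSTER_RADIUS = 25.0; // km, from the text

    // Customers are placed uniformly within CLUSTER_RADIUS of a random centre.
    static double[][] clusteredCustomers(int m, double[][] centres) {
        double[][] pos = new double[m][2];
        for (int j = 0; j < m; j++) {
            double[] c = centres[RNG.nextInt(centres.length)];
            double angle = 2.0 * Math.PI * RNG.nextDouble();
            double radius = CLUSTER_RADIUS * RNG.nextDouble();
            pos[j][0] = c[0] + radius * Math.cos(angle);
            pos[j][1] = c[1] + radius * Math.sin(angle);
        }
        return pos;
    }

    // FC = 5000 plus a term that grows as the warehouse approaches its
    // nearest cluster centre; the scaling factor 1000 is hypothetical.
    static double fixedCost(double distToNearestCentre) {
        return 5000.0 + 1000.0 / (1.0 + distToNearestCentre);
    }
}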

From these two main configurations, we generated a set of 22 instances (11 for each one). To do that, we modified the values of FC, HC, and OC by ±25%, and, additionally, we varied the review period $R$ among the values 1, 2, and 3. Each instance thus evaluates the impact on the network configuration (and on its costs) of either increasing or decreasing one parameter value at a time. For example, one derived instance reduces the FCs of the uniform base instance to 75% of their original value.

5.1. Computational Results and Discussion

The computational experiments were performed on an Intel Core Duo T2700 CPU at 2.33 GHz with 2 GB of RAM, running the Ubuntu 12.04 operating system. Both the TS and PSO algorithms were implemented in the Java programming language using the NetBeans IDE. To validate the algorithms and measure their convergence, we used a set of small instances from [42] whose optimal solutions were obtained by means of an enumerative method. This enumerative algorithm ran for approximately 4 hours on each instance, whereas the heuristic algorithms took less than 5 seconds each; furthermore, they found the optimal solution in all 10 runs over each instance. We also ran a random search algorithm to determine the real improvement provided by our heuristics: on average, the costs obtained by the random search were approximately 300% higher than those obtained by the heuristics, for all instances.

We fixed both the nonstockout probability $\alpha$ and the probability of not violating the capacity constraint, $1-\beta$, at 95%. Hence, the values of $Z_\alpha$ and $Z_{1-\beta}$ are fixed at 1.648. As we mentioned in Section 4 of this paper, we implemented two types of neighbourhood movements for our TS algorithm. Table 1 shows a summary of the results obtained using both movements for all 22 instances.

The column Instance identifies the evaluated instance. The next two columns report the average cost obtained after executing the TS algorithm 10 times using the first and the second neighbourhood movement, respectively, and the two standard deviation columns report the corresponding dispersion of the results. Finally, the two time columns (sec) show the average time needed by our TS implementation to find the best solution.

Table 1 shows how, predictably, the average cost increases with the value of the review period $R$. Furthermore, the DND itself is also affected by the change in the value of $R$: in several instances, when $R$ increases, the number of open DCs increases with it. Figures 2(a) and 2(b) illustrate this situation.

These two effects are produced for the same reason: when the value of $R$ increases, the safety stock must be larger to avoid stockouts during the period without review. Because of their limited capacity, DCs cannot serve the same number of customers when the safety stock is increased, and the model is then forced to open an extra DC to satisfy the total customer demand. This relation is especially important because it shows that the value of $R$ affects not only the cost of the distribution network but also its design.
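To illustrate the magnitude of this effect with purely hypothetical numbers (not taken from the benchmark instances), consider a warehouse with $D_i = 100$, $V_i = 400$, $LT_i = 2$, and $Z_\alpha = 1.645$. From (2) and (9),

$$SS_i = 50 R_i + 32.9\sqrt{R_i + 2} - 2 \approx 105.0,\; 163.8,\; 221.6 \quad \text{for } R_i = 1, 2, 3,$$

so each additional day between reviews adds roughly 50 to 60 units to the left-hand side of the capacity constraint (14), which is what eventually forces an additional DC to open.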

We now compare the best solutions obtained by the TS algorithm with the ones obtained by the PSO method. Table 2 shows the results of this comparison.

According to the results, the PSO algorithm seems to perform slightly better than TS. This is most evident for the clustered instances, where PSO shows better performance in 9 out of 11 cases. On the other hand, TS is clearly better than PSO when the uniformly distributed instances are considered. These are quite interesting results because they suggest that we should take into account how customers are distributed when choosing among different heuristic techniques. It is clear that further research is needed in this field.

Because we do not have any bounds for our instances, we cannot say anything definitive about the quality of our solutions. Another interesting research line would therefore be to find lower bounds for this model to determine how good the solutions provided by the heuristic methods are.

Moreover, when we examine the components of our objective function, we can see how they behave. In Table 3, the columns FC (%), TC (%), Inv (%), and SS (%) show the average change in the location, transportation, inventory, and safety stock costs, respectively, as a percentage of the corresponding base instance.

Another interesting finding is that the individual subcosts (FC, TC, inventory, and SS) do not increase to the same extent as the parameters do. In other words, when, for example, FC is increased (decreased) by 25%, the portion of the total cost corresponding to FC does not change by 25% when compared with the corresponding base instance. This can be attributed to the ability of the model to seek other location/allocation alternatives to keep the total cost as low as possible.

Regarding our algorithms, they showed good performance in terms of time and convergence. As we mentioned before, we cannot compare our results with other works because this is a very new model. Figure 3 shows the convergence of the TS algorithm for a specific instance. Here, we can see how the diversification method was invoked several times (peaks in the solid line) during the execution after reaching a (presumed) local optimum.

Despite the fact that the restart method proved to be effective in providing some diversification to our algorithm (and consequently allowing it to move away from local optima), other diversification strategies could be implemented to exploit information from previous iterations and, consequently, give the algorithm a higher exploration level. With regard to the aspiration criterion, it was invoked 30 times on average per execution, with a minimum of 4 and a maximum of 71 invocations. These values show how the frequency-based memory of the TS works.

6. Conclusions and Future Work

In this paper, we solve a novel inventory-location model that integrates inventory decisions at the strategic level. As a first contribution, our model considers a periodic inventory review policy, unlike previous models, which assume continuous inventory review. We have also generated a set of 22 medium-sized instances, for both uniformly distributed and 2-clustered customers, each considering 50 DCs and 100 customers. This set of instances was solved using the TS and PSO heuristics independently, which yielded an approximation to the optimal solution for each instance. Despite the simplicity of both approaches, good solutions (with respect to randomly generated ones) were found in an acceptable amount of time. The results confirm the importance of the value of the review period $R$, which affects both the cost and the configuration of the distribution network. A sensitivity analysis over the key parameters of the model was performed; the numerical results showed the effect on the network produced by changes in both inventory costs (HC and OC) and TC. As future work, it is possible to integrate other tactical elements, such as transportation (routing) or DC layout decisions, into our ILM, making it even more representative of reality and thereby improving the quality of the solutions. Based on the obtained results, we can state, preliminarily, that evolutionary algorithms, particularly those based on swarm intelligence such as PSO, should be implemented to obtain better solutions for those instances that involve clusters, whereas local search strategies should be considered for instances with uniformly distributed customers. However, further investigation is needed in this area.

Moreover, matheuristic approaches (which mix mathematical programming and metaheuristics) seem to be another interesting research area to explore. Finally, it would also be interesting to compute lower bounds for this model in order to measure how far from the optimal solution our approximations are.

Acknowledgment

The authors are indebted to the anonymous referees and the editor for their valuable comments throughout the review process. The paper improved remarkably through their recommendations; the authors remain responsible for any remaining errors. Fernando Paredes is supported by FONDECYT-Chile Grant 1130455.