Abstract

Perishable products, which include medical and pharmaceutical items as well as food products, are common in commerce and industry. Developing efficient network designs for the storage and distribution of perishable products plays a prominent role in the cost and quality of these products. This paper investigates and analyzes the impact of applying an integrated approach to the network design of perishable products. For this purpose, the problem is formulated as a mixed-integer nonlinear mathematical model that integrates inventory control and facility location decisions. To solve the integrated model, a memetic algorithm (MA) is developed. To verify the proposed algorithm, its results are compared with those of a Lagrangian relaxation heuristic adapted from the literature. Moreover, sensitivity analysis of the main parameters of the model is conducted to compare the results of the integrated approach with those of a decoupled method. The results show that as the products become more perishable, the integrated method becomes increasingly preferable to the decoupled one.

1. Introduction

Perishable products are very common in industry, commerce, and daily life [1]. Major categories of perishable products include food products, medicines, pharmaceutical items, and many other plant and industrial goods. These products are only usable during their lifetime; once their lifetime is over, they must be discarded [2]. Nahmias [3] classified perishable products, based on their lifetime, into two groups: (1) fixed-lifetime items, which have a predetermined expiry date, and (2) random-lifetime items, for which there is no specified expiry date.

The most challenging characteristic of perishable products is their limited lifetime, which must be considered when deciding on the inventory control policies for these products [4–6]. However, traditional distribution network design models assumed that products can be stored indefinitely at the stocking points (warehouses, distribution centers) of the network. In other words, lifetime was not taken into account in previous distribution network design models when deciding on the inventory control policy of the network. Recently, perishable inventory control has gained much attention in the distribution network design literature. The studies by Leśniewski and Bartoszewicz [7], Su et al. [8], Drezner and Scott [9], Firoozi et al. [10], and Coelho and Laporte [11] are recent examples of research in this area.

The distribution network design (DND) problem, also known as supply chain network design, is one of the most promising areas in logistics and supply chain management [12, 13]. In a distribution network, items are produced at the manufacturers and shipped to the warehouses and then to the retailers to finally meet the customers’ demand. Distribution network design plays a key role in the cost and quality of every final product [14], and this role becomes even more critical when the products are perishable. A large number of distribution networks in the real world deal with the distribution and storage of perishable products [5, 6, 15, 16]. Moreover, the decisions to be made when designing a network are highly interrelated [17, 18]. An obvious instance of this fact is the mutual effect between the facility location and inventory control decisions of a distribution network: any change in facility location decisions, that is, the number and location of facilities, may influence the transportation and replenishment costs and therefore affects the optimal inventory policy. Conversely, the inventory policy determines the frequency of orders and thus affects the transportation cost. Owing to this interrelationship, a considerable amount of potential cost saving is lost if the relevant decisions are not optimized simultaneously [19].

Despite the reasons listed above, namely, (1) the existence of a large number of distribution networks dealing with perishable products, (2) the difference between inventory control modeling of perishable and nonperishable items, and (3) the strong interrelation between network design decisions, most network design models still do not incorporate perishable inventory control into the other decisions of a distribution network. The model developed in this study therefore aims to investigate the potential benefits derived from integrating the facility location and inventory control decisions of a distribution network that is responsible for the distribution of perishable products. Hence, an integrated and a decoupled model are developed for the network design in this study, where the integrated model optimizes the network design decisions simultaneously and the decoupled method optimizes the decisions sequentially. Sensitivity analysis is conducted to show whether the value of integration is affected by the lifetime of the products.

2. Literature Review

A distribution network normally consists of suppliers, retailers, and distribution centers (DCs) or warehouses. In order to take advantage of risk pooling, each DC receives demands from several retailers and places orders with the supplier. The inventory is kept at the DCs to meet the demands of the retailers. The objective of distribution network design models is to determine the optimal number and location of DCs, the allocation of retailers to DCs, and the ordering cycle and frequency of orders of the DCs so that the total cost is minimized [20].

Daskin et al. [21] introduced one of the most well-known inventory-location models, known as the location model with risk pooling (LMRP). The model integrated inventory and safety stock decisions with the uncapacitated facility location problem (UFLP). A Lagrangian relaxation heuristic was developed to solve the model. It was assumed in this model that the mean-to-variance ratios were identical for all retailers. In addition, the model assumed an identical lead time between the supplier and the DCs. The LMRP model was also studied by Shen et al. [22], who used a set partitioning approach to solve it.

LMRP became the basis of many consecutive network design models. Several capacitated versions of LMRP were developed by Miranda and Garrido [23], Ozsen et al. [17], and Miranda and Garrido [24]. Shen [25] extended LMRP by considering multiple products for the model. Shu et al. [26] removed the assumption of identical mean-to-variance ratio for demands of retailers.

Sourirajan et al. [27, 28] developed the LMRP by removing the assumption of identical lead times between supplier and distribution centers (DCs). Qi and Shen [29], Shavandi and Bozorgi [30], and Atamtürk et al. [31] studied the effects of uncertainty on network design decisions. Gebennini et al. [32] developed a dynamic version of LMRP, and Melo et al. [33] studied the redesigning of a distribution network.

Despite the existence of a large number of distribution networks that are dealing with the distribution of perishable products, integration of perishable inventory models with other network design decisions has not been considered in the distribution network design studies. Therefore, the aim of this study is to evaluate the effects of integrating network design decisions of a distribution network that is dealing with perishable products. The results of this study also help to answer the question as to whether the value of integration is affected by the length of the lifetime of products.

3. Problem Definition and Modeling

The distribution network considered in this study consists of one supplier, a set of retailers, and a set of distribution centers. In order to take advantage of risk pooling, the distribution centers order products from the supplier and store the inventory of products to satisfy the stochastic demands of the retailers. In other words, the inventories of the retailers assigned to a DC are aggregated at that DC, which decreases the safety stock of the network [34]. The products are perishable and have a limited lifetime. The objective is to estimate the cost saving that could be achieved by applying an integrated approach for network design instead of a decoupled approach. For this purpose, an integrated and a decoupled approach are developed and compared with each other. The decoupled approach first determines the number and location of DCs and the allocation of retailers to DCs and then finds the optimal inventory policy of the DCs, whereas the integrated approach optimizes these decisions simultaneously. The remainder of this section describes the integrated and the decoupled approaches. The notation used to model the problem is listed at the end of the paper.

Effect of Lifetime on the Inventory Policy. Traditional EOQ inventory models compute the ordering cycle as $T_j = Q_j/D_j$, where $Q_j$ is the order quantity and $D_j$ is the total mean demand. However, if products have a lifetime shorter than the ordering cycle, this formula cannot be used, because some products may reach the end of their lifetime while they are still kept in the DCs. To prevent this situation, the ordering cycle must be restricted so that it does not exceed the products' lifetime. If the products' lifetime starts as soon as they leave the supplier for the DCs, then by the time the products are delivered to the DCs they have already lost a part of their lifetime equal to the supplier-DC lead time $L_j$. On the other hand, according to Figure 1, which shows the profile of inventory versus time under the EOQ policy, the longest time that a product stays in a DC (or warehouse) equals the ordering cycle plus a time denoted $t_s$, where $t_s$ is the period of time it takes for the safety stock to be completely replaced by fresh inventory. The value of $t_s$ is computed as $t_s = z_\alpha\sqrt{L_j V_j}/D_j$, where $z_\alpha\sqrt{L_j V_j}$ is the safety stock held at DC $j$. Therefore, we can write

$$\frac{Q_j}{D_j} + \frac{z_\alpha\sqrt{L_j V_j}}{D_j} \le B_j - L_j \quad (1)$$

On the left-hand side of inequality (1), the first term equals the ordering cycle and the second term is $t_s$. The terms appearing on the right-hand side of this inequality are, respectively, the lifetime and the lead time. Inequality (1) can be rewritten as follows:

$$Q_j \le D_j\,(B_j - L_j) - z_\alpha\sqrt{L_j V_j} \quad (2)$$

Constraint (2), which restricts the order quantity, is considered in both the integrated and the decoupled approaches.
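As a purely illustrative check of (2) (the numbers are hypothetical and not taken from the data sets used later), suppose a DC faces an annual mean demand of $D_j = 3650$ units, the product lifetime net of lead time is $B_j - L_j = 4/365$ year (4 days), and the safety stock is $z_\alpha\sqrt{L_j V_j} = 15$ units. Then

$$Q_j \le 3650 \cdot \tfrac{4}{365} - 15 = 40 - 15 = 25 \text{ units},$$

so the ordering cycle can be at most $25/3650$ of a year, that is, roughly 2.5 days, even though an unconstrained EOQ computation might suggest a longer cycle.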

3.1. Cost Components

This section describes the cost components of the network design, comprising the inventory holding and safety stock cost, the ordering cost, the transportation cost, and the fixed installation cost of the DCs.

Inventory Holding Cost. The total cost of holding inventory in a DC is determined by (3), where $h_j$ is the unit inventory holding cost of DC $j$ and the terms $Q_j/2$ and $z_\alpha\sqrt{L_j V_j}$, respectively, give the average working inventory and the safety stock of a DC:

$$h_j\left(\frac{Q_j}{2} + z_\alpha\sqrt{L_j V_j}\right) \quad (3)$$

Ordering Cost. The annual cost of placing orders by DC $j$ to the supplier is computed by (4), where $D_j/Q_j$ determines the total number of orders placed by DC $j$ in a year:

$$K_j\,\frac{D_j}{Q_j} \quad (4)$$

Transportation Cost. The total fixed and variable transportation cost from the supplier to the DCs and from the DCs to the retailers is computed by (5). The first term in this equation is the fixed transportation cost and the second term is the variable transportation cost:

$$g_j\,\frac{D_j}{Q_j} + \sum_{i\in I}\left(c + a\,d_{ij}\right)\mu_i\,Y_{ij} \quad (5)$$

Fixed Setup Cost. The annual setup cost of DC $j$ is calculated by (6), where $X_j$ is a binary variable that is equal to 1 if DC $j$ is established and 0 otherwise:

$$f_j\,X_j \quad (6)$$

3.2. Integrated Approach

The integrated model optimizes location-allocation and inventory decisions simultaneously. The objective function and constraints of this model are as follows:

$$\min \sum_{j\in J}\left[ h_j\left(\frac{Q_j}{2} + z_\alpha\sqrt{L_j\sum_{i\in I}\sigma_i^2 Y_{ij}}\right) + (K_j + g_j)\,\frac{\sum_{i\in I}\mu_i Y_{ij}}{Q_j} + f_j X_j + \sum_{i\in I}(c + a\,d_{ij})\,\mu_i\,Y_{ij}\right] \quad (7)$$

subject to

$$\sum_{j\in J} Y_{ij} = 1 \quad \forall i\in I \quad (8)$$

$$Y_{ij} \le X_j \quad \forall i\in I,\ j\in J \quad (9)$$

$$Q_j \le \left(\sum_{i\in I}\mu_i Y_{ij}\right)(B_j - L_j) - z_\alpha\sqrt{L_j\sum_{i\in I}\sigma_i^2 Y_{ij}} \quad \forall j\in J \quad (10)$$

$$X_j,\,Y_{ij}\in\{0,1\},\quad Q_j \ge 0 \quad \forall i\in I,\ j\in J \quad (11)$$

Since the integrated model optimizes the facility location and inventory decisions simultaneously, the mean and variance of demand of a DC are not known in advance. Therefore, in the integrated model, the mean and variance of demand of DC $j$ are written as $\sum_{i\in I}\mu_i Y_{ij}$ and $\sum_{i\in I}\sigma_i^2 Y_{ij}$, respectively, where $\mu_i$ and $\sigma_i^2$ are the mean and variance of demand of retailer $i$, and the binary variables $Y_{ij}$ determine whether retailer $i$ is assigned to DC $j$. In objective function (7), the first term is the inventory holding cost, the second term is the fixed ordering and shipment cost, the third term is the DC setup cost, and the last term is the variable transportation cost. Constraint set (8) specifies that each retailer can only be assigned to one DC. Constraint set (9) guarantees that retailers are only assigned to open DCs. Constraint set (10) makes sure that products are not kept in a DC longer than their lifetime, and constraint set (11) specifies that $X_j$ and $Y_{ij}$ are binary variables and $Q_j$ is nonnegative.
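To make the evaluation of objective (7) concrete, the following minimal Python sketch computes its four cost terms for a candidate assignment and a given set of order quantities. It is an illustration only: the parameter container `p`, its keys, and the data structures (lists indexed by retailer, dictionaries keyed by DC, a nested distance table `d[i][j]`) are assumptions that mirror the Notation list, not code from the paper.

```python
import math

def objective(assign, Q, p):
    """Objective (7) for a candidate solution.
    assign[i] = DC serving retailer i; Q[j] = order quantity of DC j;
    p holds the parameters of the Notation list (illustrative names)."""
    total = 0.0
    for j in set(assign):                                  # open DCs only
        R = [i for i, dc in enumerate(assign) if dc == j]  # retailers of DC j
        D_j = sum(p["mu"][i] for i in R)                   # DC mean demand
        V_j = sum(p["var"][i] for i in R)                  # DC demand variance
        ss = p["z_alpha"] * math.sqrt(p["L"][j] * V_j)     # safety stock
        total += p["h"][j] * (Q[j] / 2 + ss)               # holding cost
        total += (p["K"][j] + p["g"][j]) * D_j / Q[j]      # ordering and shipment
        total += p["f"][j]                                 # DC setup cost
        total += sum((p["c"] + p["a"] * p["d"][i][j]) * p["mu"][i] for i in R)
    return total
```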

3.3. Decoupled Approach

The decoupled approach is a two-stage procedure. The first stage applies the classical UFLP model to determine the configuration of the network, taking into account the transportation and DC installation costs. The resulting configuration is then passed to the second stage, where the order quantities and ordering cycles of the DCs are determined. This two-stage approach is solved by Lingo 12.0 software.
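A sketch of the two stages, consistent with the description above and with the cost components of Section 3.1, is given below; the exact cost terms included in the first-stage UFLP are an assumption.

$$\text{Stage 1:}\quad \min \sum_{j\in J} f_j X_j + \sum_{j\in J}\sum_{i\in I}(c + a\,d_{ij})\,\mu_i\,Y_{ij} \quad \text{s.t.}\quad \sum_{j\in J} Y_{ij}=1\ \ \forall i\in I,\quad Y_{ij}\le X_j,\quad X_j,\,Y_{ij}\in\{0,1\}.$$

$$\text{Stage 2 (for each open DC } j\text{, with } \bar D_j=\textstyle\sum_{i}\mu_i \bar Y_{ij},\ \bar V_j=\sum_{i}\sigma_i^2 \bar Y_{ij}\text{ fixed by Stage 1):}\quad \min_{Q_j\ge 0}\ h_j\Big(\frac{Q_j}{2}+z_\alpha\sqrt{L_j \bar V_j}\Big)+(K_j+g_j)\,\frac{\bar D_j}{Q_j} \quad \text{s.t.}\quad Q_j\le \bar D_j(B_j-L_j)-z_\alpha\sqrt{L_j\bar V_j}.$$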

4. Solution Method for Integrated Approach

In this study, a memetic algorithm (MA) is developed to solve the integrated mixed-integer nonlinear model described in Section 3.2. The efficiency of the developed algorithm is assessed by comparing its results, in terms of the total cost, with those of a Lagrangian relaxation algorithm developed by Firoozi et al. [10]. The following sections describe the proposed memetic algorithm (MA).

4.1. Memetic Algorithm

The memetic algorithm (MA) is a metaheuristic that hybridizes an evolutionary framework (such as a genetic algorithm) with local search algorithms [35]. MA has been applied successfully to various optimization problems. The memetic algorithm developed in this study is a hybrid of a genetic algorithm (GA) and a local search. GA is a stochastic metaheuristic inspired by the principles of genetics and natural selection [36]. It initially generates a population of chromosomes, each representing a possible solution to the problem. Each chromosome consists of a number of genes that encode part of the solution. A number of chromosomes from the current population (called parents) are selected based on selection rules and undergo mutation and crossover to produce offspring. In addition to the offspring produced by crossover and mutation, an elitism strategy selects the fittest chromosomes in terms of the fitness function to survive into the next generation (new population). This strategy protects the search from losing the best solutions found so far. The combination of crossover, mutation, and elitism improves the population over the generations until the fittest member of the population represents the optimal or a near-optimal solution [28]. The size of the population remains constant over the generations. The following subsection describes the structure of the chromosomes and the way they are produced.

4.1.1. Chromosome Representation

In this paper, an integer vector encoding scheme is applied for chromosome representation. This encoding is similar to the one considered by Diabat et al. [37], in which the length of a chromosome equals the number of retailers. The $i$th gene of the chromosome shows which DC supplies the $i$th retailer. Figure 2 displays a possible chromosome for a distribution network consisting of 5 potential distribution centers and 6 retailers. According to this figure, the first and second retailers are supplied by distribution center number five; the third, fifth, and sixth retailers are supplied by distribution center number four; and retailer number four is supplied by distribution center number one.
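A small Python sketch of this decoding, using the chromosome of Figure 2 as described above (1-based retailer and DC indices); the function name is illustrative:

```python
from collections import defaultdict

def decode(chromosome):
    """Map each retailer (gene position) to its supplying DC (gene value).
    Returns {dc: [retailers]} with 1-based indices, as in Figure 2."""
    assignment = defaultdict(list)
    for retailer, dc in enumerate(chromosome, start=1):
        assignment[dc].append(retailer)
    return dict(assignment)

# Chromosome of Figure 2 (5 candidate DCs, 6 retailers):
print(decode([5, 5, 4, 1, 4, 4]))   # {5: [1, 2], 4: [3, 5, 6], 1: [4]}
```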

A very important advantage of this encoding is that it always ensures that the single-sourcing assignment, that is, constraint (8), is satisfied. More importantly, the length of the chromosome (number of genes) is equal to the number of retailers. Hence, the time complexity of the algorithm is lower than that of encodings with $|I| \times |J|$ genes, where $|J|$ is the number of candidate DCs and $|I|$ is the number of retailers.

The only concern about this encoding is that the produced chromosome might be infeasible with regard to constraint set (10), meaning that there is at least one open distribution center in a generated solution (chromosome) whose order quantity is greater than the right-hand side of constraint set (10). To settle this problem, the algorithm first obtains the order quantity $Q_j$ for all DCs that are open in a given chromosome. This is done by taking the derivative of objective function (7) with respect to $Q_j$ and solving for $Q_j$, where $Q_j$ is the order quantity of DC $j$:

$$Q_j = \sqrt{\frac{2\,(K_j + g_j)\,D_j}{h_j}} \quad (14)$$

If the order quantity of DC $j$ obtained by (14) satisfies constraint set (10), then it is taken as the optimal order quantity for DC $j$. However, if there is at least one DC whose order quantity violates constraint (10), the situation is different. In that case, since objective function (7) is convex in $Q_j$, the optimal value of $Q_j$ occurs at the boundary of the feasible interval, so the optimal order quantity is obtained by making constraint (10) binding, as shown in the following:

$$Q_j = D_j\,(B_j - L_j) - z_\alpha\sqrt{L_j V_j} \quad (15)$$

Another issue that arises here is that (15) may result in a negative order quantity for a DC. In such a case, the infeasible chromosome is discarded, and the algorithm keeps running until the desired number of chromosomes satisfying the nonnegativity condition of the order quantity for all DCs has been generated. These two feasibility conditions are checked for every new chromosome generated during the search process of the algorithm.
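The order quantity logic of (14)-(15) and the two feasibility checks can be sketched in Python as follows; the parameter container `p` is the same illustrative structure assumed in the earlier objective sketch.

```python
import math

def order_quantities(assign, p):
    """Order quantity of every open DC per (14), capped by the lifetime
    bound of constraint (10); returns None if any DC would require a
    negative quantity, i.e., the chromosome must be discarded."""
    Q = {}
    for j in set(assign):
        R = [i for i, dc in enumerate(assign) if dc == j]
        D_j = sum(p["mu"][i] for i in R)
        V_j = sum(p["var"][i] for i in R)
        ss = p["z_alpha"] * math.sqrt(p["L"][j] * V_j)
        q_eoq = math.sqrt(2 * (p["K"][j] + p["g"][j]) * D_j / p["h"][j])  # (14)
        q_max = D_j * (p["B"][j] - p["L"][j]) - ss                        # (15)
        if q_max <= 0:
            return None                     # nonnegativity violated
        Q[j] = min(q_eoq, q_max)            # keep constraint (10) satisfied
    return Q
```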

4.1.2. Elitism

During the search process, the algorithm always keeps the best found solutions and stores them into a so-called elite population. The number of individuals in the elite population is one of the input parameters that must be tuned. Each time that an iteration is executed the elite population is updated.

4.1.3. Crossover and Mutation Operators

Crossover and mutation operators protect the search from being trapped in local optima [38]. In crossover, two parents are combined and two offspring are produced. This operator chooses the same position at random along the two parents' chromosomes and swaps the portions located after that position. A sample of this operator is displayed in Figure 3. In this algorithm, parents for the crossover operation are selected by the tournament selection method. This method has a tournament size parameter: a tournament size of $k$ means that $k$ chromosomes are randomly selected from the current population, and the best of them in terms of the cost function is selected as one of the parents required for the crossover operation. In contrast to crossover, mutation requires only one parent and generates one offspring. This operator chooses a number of genes at random along the chromosome and exchanges the value of each selected gene for another feasible value (a number between 1 and the number of DCs). In this algorithm, parents for mutation are randomly chosen from the elite population. Figure 4 displays how mutation generates a new chromosome.
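A minimal sketch of these three operators (single-point crossover, mutation, and tournament selection) on the integer-vector encoding; the function names and signatures are illustrative:

```python
import random

def crossover(parent1, parent2):
    """Single-point crossover: swap the tails after a random cut point."""
    cut = random.randint(1, len(parent1) - 1)
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]

def mutate(parent, n_dcs, n_genes=1):
    """Reassign a few randomly chosen retailers to random DCs (1..n_dcs)."""
    child = parent[:]
    for gene in random.sample(range(len(child)), n_genes):
        child[gene] = random.randint(1, n_dcs)
    return child

def tournament_select(population, costs, size):
    """Draw `size` chromosomes at random and return the cheapest one."""
    contenders = random.sample(range(len(population)), size)
    return population[min(contenders, key=lambda idx: costs[idx])]
```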

4.1.4. Improvement Algorithms

A five-step improvement heuristic is embedded into the GA to enhance the search. The improvement heuristic is executed on the elite and mutation populations. The steps of this heuristic are selected randomly for each individual, and each step is run on an individual until no further improvement can be achieved. In each step, if a better solution in terms of the total cost is attained, the old solution is replaced with the better one; otherwise, the previous solution is kept. The steps are as follows (a sketch of two of these moves is given after the list).

Step 1. This step assigns a retailer to a DC other than the one to which it is currently assigned.

Step 2. This step considers two DCs and exchanges their retailers.

Step 3. This step considers one retailer from a DC and one retailer from another DC and then swaps the retailers’ assignment.

Step 4. This step considers a DC and assigns all of its retailers to another DC (either an open or a closed DC).

Step 5. This step considers a DC and assigns all of its retailers to other randomly selected DCs (not all retailers to one DC).
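As referenced above, the following Python sketch illustrates two of the five moves (Steps 1 and 3); the remaining steps follow the same accept-if-improved pattern. `cost_fn` evaluates a full chromosome and is an assumed helper, not code from the paper.

```python
import random

def step1_reassign(solution, n_dcs, cost_fn):
    """Step 1: move one randomly chosen retailer to the DC that gives the
    largest cost reduction; keep the original solution if none improves it."""
    best, best_cost = solution, cost_fn(solution)
    i = random.randrange(len(solution))
    for dc in range(1, n_dcs + 1):
        if dc == solution[i]:
            continue
        candidate = solution[:i] + [dc] + solution[i + 1:]
        cost = cost_fn(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

def step3_swap(solution, cost_fn):
    """Step 3: swap the DC assignments of two retailers served by different
    DCs; accept the swap only if it lowers the total cost."""
    i, k = random.sample(range(len(solution)), 2)
    if solution[i] == solution[k]:
        return solution
    candidate = solution[:]
    candidate[i], candidate[k] = candidate[k], candidate[i]
    return candidate if cost_fn(candidate) < cost_fn(solution) else solution
```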

4.2. MA Procedure

This section describes how the MA developed in this paper works. The algorithm randomly generates the initial population, which is considered the current population. Then the elite population, consisting of the best members of the current population, is formed. The mutation population is generated afterward by randomly selecting individuals from the elite population and mutating the genes of the selected individuals. Parents for crossover are selected from the current population by the tournament selection method. Then the improvement local search heuristic described in Section 4.1.4 modifies the elite and mutation populations, and a new population is formed by the three operators (crossover, mutation, and elitism). The current population is then replaced with the new one. The algorithm keeps running until the maximum number of iterations is reached. The stages performed by the GA and the MA are displayed in Figure 5.
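The overall loop can be summarized by the following Python skeleton, which reuses the operator sketches given earlier (`crossover`, `mutate`, `tournament_select`) and an assumed `local_search` wrapper around the five improvement steps; the population size, elite size, tournament size, and iteration limit are the tunable inputs referred to in Table 1.

```python
import random

def memetic_algorithm(init_population, cost_fn, local_search, n_dcs,
                      elite_size, tournament_size, n_mutants, max_iterations):
    """Skeleton of the MA of Section 4.2 (an illustrative sketch)."""
    population = list(init_population)
    for _ in range(max_iterations):
        costs = [cost_fn(ch) for ch in population]
        # Elitism: keep the best chromosomes of the current population.
        order = sorted(range(len(population)), key=lambda idx: costs[idx])
        elite = [population[idx] for idx in order[:elite_size]]
        # Mutation population: mutate individuals drawn from the elite.
        mutants = [mutate(random.choice(elite), n_dcs) for _ in range(n_mutants)]
        # Crossover: parents are chosen by tournament selection.
        offspring = []
        while len(offspring) < len(population) - elite_size - n_mutants:
            p1 = tournament_select(population, costs, tournament_size)
            p2 = tournament_select(population, costs, tournament_size)
            offspring.extend(crossover(p1, p2))
        # Local search improves the elite and mutation populations.
        elite = [local_search(ch, cost_fn) for ch in elite]
        mutants = [local_search(ch, cost_fn) for ch in mutants]
        # A new population of constant size replaces the current one.
        population = (elite + mutants + offspring)[:len(population)]
    return min(population, key=cost_fn)
```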

5. Validation of the Proposed Algorithm

This section reports numerical experiments conducted to investigate the performance of the developed MA. A total of 36 test problems are generated by varying the parameters of the 15-node and 49-node data sets from Daskin [39]. These test problems are generated by considering three levels of the inventory holding cost, three levels of the fixed ordering cost, and two levels of the products' lifetime. The results obtained by the MA are compared with the results of the Lagrangian relaxation algorithm developed by Firoozi et al. [10]. Parameters of the memetic algorithm and the Lagrangian relaxation are shown in Tables 1 and 2, respectively. The means and variances of the retailers' demands are the same as the demand parameters of Daskin [39]. Distances between retailers are calculated using the great circle distance formula, based on the longitude and latitude of the retailers' locations. Fixed installation costs are those considered by Daskin [39], divided by 10. Variable transportation costs are set to 0.4 units of cost, and the lead time is set to 1 day. Tables 3 and 4 compare the results of the two methods. As the results show, the gap between the results of the two methods is zero for all the generated test problems.

6. Results and Discussion

This section investigates the average cost saving obtained by applying the integrated approach instead of the decoupled approach. The amount of cost saving, or the value of integration, is computed by the following formula:

$$\text{Value of integration (\%)} = \frac{\text{Cost}_{\text{decoupled}} - \text{Cost}_{\text{integrated}}}{\text{Cost}_{\text{decoupled}}} \times 100$$

To find the average value of integration, sensitivity analysis is conducted on four main parameters of the problem: the inventory holding cost, the ordering cost, the products' lifetime, and the variances of demands. The sensitivity analysis investigates how these parameters influence the value of integration. The base case for the tests is the 49-node data set from Daskin [39]. The four parameters are varied over a wide range to generate a total of 1715 different test problems. These cases are generated by varying the variance of demand, the ordering cost, the product lifetime, and the inventory holding cost as follows: the variances of demands are altered from 0.25 to 1.75 times their initial values, in steps of 0.25; the ordering cost is changed from 0 to 600 units of cost, in steps of 100; the product lifetime is varied from 3 to 7 days, in steps of 1; and the inventory holding cost is changed from 2 to 152 units of cost, in steps of 25. For each parameter the average value of integration is calculated and analyzed.
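The full-factorial design described above can be reproduced with a few lines of Python; the values below simply restate the ranges given in the text.

```python
import itertools

variance_factors = [0.25 * k for k in range(1, 8)]   # 0.25 .. 1.75, step 0.25
ordering_costs   = list(range(0, 601, 100))          # 0 .. 600, step 100
lifetimes        = list(range(3, 8))                 # 3 .. 7 days
holding_costs    = list(range(2, 153, 25))           # 2 .. 152, step 25

grid = list(itertools.product(variance_factors, ordering_costs,
                              lifetimes, holding_costs))
print(len(grid))   # 7 * 7 * 5 * 7 = 1715 test problems
```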

In this section, demand parameters are selected from Daskin [39]. Distances between retailers are calculated using the great circle distance formula based on the longitude and latitude of locations provided by Daskin [39]. Fixed setup costs are selected from Daskin [39] but divided by 10. Total fixed transportation and ordering cost is set to 500 units of cost. Variable transportation cost is set to 50 units of cost. Inventory holding cost is set to 5 units of cost, and lead time is set to 1 day.

Figure 6 shows the average value of integration versus changes in the products' lifetime and the inventory holding cost. This graph consists of seven curves, each corresponding to a different level of inventory holding cost. Each point in this graph is the average of 49 test problems obtained by keeping the inventory holding cost and product lifetime constant and varying the other two parameters. It is observed from this figure that as the lifetime of the products becomes shorter, that is, as the products become more perishable, the value of integration increases dramatically. Additionally, it is evident from Figure 6 that the value of integration increases as the inventory holding cost increases. The same conclusion can be drawn when the fixed ordering cost is varied instead of the inventory holding cost, as shown in Figure 7.

The average cost reduction observed by varying the variances of demands and the products' lifetime is plotted in Figure 8. As this figure shows, the value of integration gets higher as the variances of demand increase, although this rise is not very pronounced. However, it is very evident from the figure that the average cost saving depends directly on the products' lifetime, in such a manner that the integration value increases as the products' lifetime decreases.

7. Conclusion

The impact of an integrated approach for optimizing network design decisions has been investigated in this study. For this purpose, the network design problem of perishable products was formulated as an integrated and a decoupled model. The integrated model optimized the inventory control and location-allocation decisions of the network simultaneously, whereas the decoupled model optimized the location-allocation decisions first and the inventory control decisions afterward. Sensitivity analysis was performed on the key parameters of the problem to obtain the value of integration. It was evident from the comparison of the results that the average cost saving obtained by the integrated approach increased as the lifetime decreased. In other words, the more perishable the product was, the more valuable the integrated approach was. Furthermore, the performance of the developed MA was compared with that of a Lagrangian relaxation algorithm from the literature.

The models that are developed in this study are for fixed-lifetime perishable items, that is, items with known expiry dates, like processed food, dairy products, or many industrial items. For future works, it would be interesting to extend the results of this study to random-lifetime perishable items for which there is no specified expiry date.

Notation

Sets
$I$: Set of retailers
$J$: Set of candidate DC locations.
Indices
$j$: Index for DCs
$i$: Index for retailers.
Input Parameters
$f_j$: Annual fixed setup cost for DC $j$
$a$: Transportation cost per unit of product per unit of distance
$c$: Per item transportation cost from the supplier to a DC
$g_j$: Per shipment transportation cost from the supplier to DC $j$
$h_j$: Inventory holding cost at DC $j$ per unit of product per year
$K_j$: Fixed ordering cost per order placed by DC $j$ to the supplier
$\mu_i$: Annual mean demand of retailer $i$
$D_j$: Annual mean demand of DC $j$
$\sigma_i^2$: Variance of annual demand for retailer $i$
$V_j$: Variance of annual demand for DC $j$
$d_{ij}$: Distance between retailer $i$ and DC $j$
$L_j$: Lead time, in terms of years, from the supplier to DC $j$
$B_j$: Lifetime of the product at DC $j$
$\alpha$: Level of service that has to be achieved at the retailers
$z_\alpha$: Standard normal deviate such that $P(Z \le z_\alpha) = \alpha$.
Decision Variables
$Q_j$: Order quantity of DC $j$
$Y_{ij}$: Binary variable, taking the value 1 if retailer $i$ is assigned to DC $j$ and 0 otherwise
$X_j$: Binary variable, taking the value 1 if DC $j$ is open and 0 otherwise.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.