Abstract

When an enterprise has thousands of varieties in its inventory, a single management method is not a feasible approach. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives simultaneously and achieve an overall better solution, yielding better convergence results and inventory decisions.

1. Introduction

Stationery discount stores offer a large variety of products in small quantities and have low average profit margins. Moreover, the uncertainty of customer demand is a challenge to the stores because product shortages often occur and backorders are needed [1]. In response to this problem, the stores need to consider relevant inventory decision objectives and develop DPSO (dynamic particle swarm optimisation) for multiobjective planning based on customer demands. The four objectives in the inventory decision are total cost, backorder rate, demand relevance, and inventory turnover rate. Understanding customer demand is a process of integrating dynamic experience, value, situational information, and professional judgment to create and incorporate the information exchange between customers and the stores. Therefore, the inventory management decision-making process is important in increasing store profits. This study applies DPSO to multiobjective planning; the algorithm incorporates a disturbance mechanism and tournament selection [2, 3] to improve PSO, and the multiobjective solutions are assessed by the maximum spread (MS) [4, 5].

According to the literature review, few studies simultaneously consider the above four objectives and apply DPSO in the analysis of the customer demand-oriented inventory decision process. By clustering the products and assessing the inventory performances, this study expects to help stationery discount stores properly control inventory in the face of various customer demands, based on inventory knowledge decision-making procedures. Moreover, it is expected that sufficient products can be provided at low prices.

2. Literature Review

This section reviews the literature on inventory management decisions and multiobjective planning algorithms to analyse the importance of the proposed inventory decision-making process.

2.1. Inventory Management Decision

The ABC inventory classification mechanism is the theoretical basis for inventory classification. Ramanathan (2006) [34] argued that ABC analysis is one of the most widely used inventory classification approaches in enterprises; moreover, it is important to consider multicriteria inventory classification and to determine the weights through weighted linear optimisation. Zhou and Fan (2007) [42] suggested that although the model proposed by Ramanathan (2006) [34] has many advantages, it may inappropriately categorise an item with a high value on an insignificant criterion as an A-category item. Thus, they proposed an extended R model to provide a more reasonable decision-making method. Hadi-Vencheh (2010) [18] proposed an extended NG-model multicriteria ABC analysis; it is a nonlinear planning model that integrates multicriteria ABC classification while maintaining the influence of the weights. Chen (2012) [10] suggested that ABC inventory classification is one of the most popular techniques for organisations to efficiently plan and control thousands of inventory items. The proposed approach improves on some previous methods by providing a more reasonable and comprehensive performance index and a unique inventory classification without any subjectivity.

In response to diverse customer demands, different inventory management decision-making concepts have been proposed, such as VMI (vendor-managed inventory) and RMI (retailer-managed inventory), which can accurately predict inventory and increase inventory turnover rates. Mishra and Raghunathan (2004) [33] pointed out that a successful VMI strategy is the main developmental trend of coordination strategy and information sharing in current supply chain management; moreover, supply chain management performance can be enhanced by inventory management and relevant cost transfers. They discussed the differences in profitability and compared business performance between VMI and RMI strategies in the retail industry, using the business model of one retailer and two competing brands. They found that when the retailer is faced with two competitors, the business performance and profitability of the VMI strategy are superior to those of the RMI strategy. Kuk (2004) [24] argued that when the VMI strategy is implemented in the electronics industry, the information technology implementation barrier and the mutual trust between vendor and retailer are the main factors affecting the success of the VMI strategy. Dong and Xu (2002) [16] suggested that inventory management proprietary rights belong to the vendor; moreover, when applying the VMI inventory management strategy, the suppliers bear the inventory costs originally borne by the retailer. According to Yao et al. (2007) [43], the inventory management proprietary rights are held by the retailer; hence, the retailer should assume inventory-related costs. The upstream, midstream, and downstream suppliers of the supply chain share information and communicate, while the retailers can reduce inventory purchase costs. The suppliers can change the inventory level and minimise inventory costs to enhance the business performance of the supply chain as a whole. Kannan et al. (2013) [22] analysed the benefits that a VMI agreement could bring in the one-supplier, multiple-customer case, comparing a supply chain managed in a traditional manner with VMI when both the vendor and the customers belong to the same organisation. The analysis is based on the economic order quantity (EOQ) formula and its related total cost, and its novelty lies in evaluating one vendor, multiple buyers, and multiple product situations.

Backorder rates and demand relevance are important objectives for an enterprise seeking to satisfy a set level of customer service, and many scholars have applied various methods to meet such customer service levels. Axsäter (2003) [6] pointed out that, through one-way horizontal allocation and alternative questioning, supply chain management performance can be measured by service level. Lee et al. (2007) [25] studied the effective horizontal allocation of the supply chain to promote the overall service level for different groups of customers. Hsu and Tsou (2010) [1] argued that inventory management is very important to an enterprise, and its purpose is to use the least cost to maintain a high standard of service, thus reducing the likelihood of backorders and meeting customers' demands for products. How to trade off such conflicting objectives is a challenge of multiobjective inventory control. They thus extended the three-objective inventory control model under backorder, as proposed by Agrell (1995), to the case of sales loss by adding local search and a hybrid multiobjective particle swarm optimisation with a clustering mechanism to solve the problem of inventory control under different models. The results were compared with the robust Pareto evolutionary algorithm, and it was found that the nondominated solutions of the hybrid multiobjective PSO are significantly better than those of the robust Pareto evolutionary algorithm under three performance measurement indicators. Finally, the solutions of different inventory models, in the cases of backorder and sales loss, were compared. Enterprises are particularly concerned about sales lost due to backorders; thus, they pay special attention to inventory management by keeping track of sales conditions, ordering appropriate inventory, and reducing inventory costs.

2.2. Multiobjective Planning Algorithm

Multiobjective planning determines a better solution across multiple objectives. Many scholars have applied algorithms to the solution of multiobjective problems in order to obtain more efficient mechanisms. Liu et al. (2007) [29] proposed a method containing a synchronous local search and a new particle updating method. The synchronous local search can directly implement local fine-tuning to enhance the global search capability of PSO, thereby alleviating premature convergence and maintaining solution diversity for a fuzzy global-best solution. Hsieh et al. (2007) [19] proposed a method to delete excessively similar particles in the external scratchpad through a clustering operation, and the circle-centre-dominated method was adopted to determine the global optimal solution. The concept of gas diffusion allows each particle to have a footing from which to expand outward; specifically, the currently stored nondominated solutions are used to calculate an imaginary circle centre whose coordinates are composed of the maximum values of the target functions in the current storage space. Krichen et al. (2012) [23] considered multiobjective linear optimisation problems (MOLPs), in which a number of linear target functions are optimised over a convex polyhedron. They proposed a new method to generate the set of efficient solutions of MOLPs, based on the adjacency concept of efficient extreme points. The method can generate efficient extreme points and maximum-efficiency viewpoints, and a characterisation of efficient combinations of adjacent extreme points on the defined borders was proposed.

For inventory multiobjective decision-making, Wang and Shu (2007) [41] developed a fuzzy decision-making model to assess the overall performance of new product supply chain design, thus minimising the total supply chain cost and maximising the value; a genetic algorithm was used to obtain the optimal solution. Özgen et al. (2008) [4] integrated the analytic hierarchy process (AHP) method and multiobjective possibilistic planning and developed an application model for supplier selection and order allocation problems that covered qualitative and quantitative factors. That model aimed to minimise the total purchase cost and total rejections and to maximise the weighted supplier points. Mansouri et al. (2012) [31] proposed the optimisation of multiple objectives to establish order-based supply chain management. They reviewed multiobjective optimisation as a key decision support technology for build-to-order supply chains. From the perspective of multiobjective optimisation of the existing optimisation models, their method attempted to develop relevant decision support by considering the different interests of each supply chain party, without involving manufacturers; hence, service-based objectives were developed to effectively satisfy the requirements of the objectives. Bouchery et al. (2012) [7] explored multiobjective problems. Using an amended EOQ model, they studied the sustainable order quantity and analysed the characteristics of the efficient solution set (Pareto optimal solutions). These results were used to compare different regulatory policies for the control of carbon emissions. They provided an interactive procedure that allows decision makers to quickly identify the optimal solution of the solution set; the proposed interactive procedure is a new combination of multicriteria decision analysis techniques.

3. Research Method

3.1. Description of Single-Objective Function

This paper considers a number of commonly used inventory management objectives, including total cost, backorder rate, demand relevance, and inventory turnover rate, which are elaborated in detail, as follows [9, 14, 26, 34].

3.1.1. Total Cost Target Equation

Symbols: total cost; item set-up cost; order cycle of cluster j; demand per unit time of item i; holding cost of each unit of inventory per unit time of item i; set of items contained in cluster j; maximum number of clusters; current position of the particle in a given dimension; an integer value in the range from 2 to the maximum number of clusters; major purchase cost of issuing an order to a supplier; set of suppliers contained in cluster j.

As the updating of the breakpoint needs to comply with the corresponding constraint, this paper uses another method of improvement, namely, adjusting the breakpoint whenever the constraint is violated. The total cost consists of ordering and item holding costs. The cost computation equation for direct clustering, as proposed by Chakravarty (1985) [9], is used. As the number of suppliers is considerably high and orders are issued per supplier, the items in a cluster are provided by a number of different suppliers; therefore, the orders are sent out separately. The cost target equation should consider the situation of a cluster consisting of multiple suppliers, as follows [9, 14, 26, 34]:

For an algorithm with cost as the target equation, the clustering results only need to be input into (2) when estimating the particle fitness value; the purpose is to minimise the ordering and item holding costs.
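To make the cost computation concrete, the following is a minimal sketch of a cluster-based total cost of the kind described above, assuming each cluster orders once per cycle and every supplier represented in the cluster receives a separate order; all names and the exact cost structure are illustrative rather than a reproduction of (2).

def total_cost(clusters, setup_cost, holding_cost, demand, order_cost, supplier_of, horizon=1.0):
    """clusters: dict cluster_id -> (order_cycle, list of item ids)
       setup_cost[i], holding_cost[i], demand[i]: per-item parameters
       order_cost[s]: major purchase cost of issuing an order to supplier s
       supplier_of[i]: supplier providing item i; horizon: planning period length."""
    cost = 0.0
    for cycle, items in clusters.values():
        orders = horizon / cycle                                   # number of joint orders in the horizon
        suppliers = {supplier_of[i] for i in items}                # a cluster may span several suppliers
        ordering = orders * (sum(order_cost[s] for s in suppliers) + sum(setup_cost[i] for i in items))
        holding = horizon * sum(holding_cost[i] * demand[i] * cycle / 2.0 for i in items)
        cost += ordering + holding                                 # objective: minimise this sum
    return cost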

3.1.2. Backorder Rate

The case in this study is a stationery discount store, which is characterised by products of small quantity and great variety, varying customer demands, and frequent backorders. Frequent backorders lower customer demand because customers may lose patience and turn to other stores. This is consistent with the practical situation [9, 14, 26, 34].

In the model construction process of this study, a known and fixed parameter is added that represents the proportion of decrease in demand during the backorder period. The rate of change in demand during the backorder period (i.e., the demand per unit time) is then defined in terms of the unit-time demand rate during the backorder period, the fixed demand rate of item i, the time of depletion of item i, and the out-of-stock time of item i. A decreasing function relating demand and time is thereby developed.

Backorders occur during the out-of-stock period, and the demand rate in this period is a decreasing function of the out-of-stock time. This paper defines the change in demand (i.e., the slope); hence, by integrating the demand rate, we can obtain the function of the curve, and substituting the corresponding value yields the maximum order level, as shown below:

We take the out-of-stock function over the backorder period and integrate it to obtain the out-of-stock area in that period (i.e., the total number of backorders), as shown below:

Hence, we can calculate the average backorder level of item i and the order level of cluster j, as follows:
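Since the symbols of the corresponding equations are not reproduced here, the following is a minimal sketch under the stated assumption of a demand rate that decreases linearly during the out-of-stock period; the names (D_i for the fixed demand rate, delta for the proportion of decrease, t2 for the out-of-stock time) are illustrative and not the paper's own notation.

def backorder_quantities(D_i, delta, t2):
    """Demand rate during the stockout is assumed to fall linearly from D_i,
       d(t) = D_i * (1 - delta * t / t2) for 0 <= t <= t2, with 0 <= delta <= 1."""
    total_backorders = D_i * t2 * (1.0 - delta / 2.0)      # integral of d(t) over [0, t2]
    average_backorder_level = total_backorders / t2 if t2 > 0 else 0.0
    return total_backorders, average_backorder_level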

3.1.3. Demand Relevance

The target equation depends on the demand relevance of items. For example, when purchasing a mechanical pencil, the refill or rubber is often purchased at the same time. In other words, before applying the algorithm, the demand relevance of items should be understood. This paper employs the relevant items of the stationery discount store to compute the demand relevance [9, 14, 26, 34].

Below are the steps for the computation of item relevance:
(1) Calculate the demand proportion of each final product.
(2) According to the bill of materials of each final product, calculate the number of each item required to produce one unit of the final product.
(3) Multiply the product demand proportion by the number of items obtained in Step (2).
(4) Sum up the quantities of the same item used across the product types, and calculate the total demand of each item.
(5) Convert the total demands of the items into demand relevance data in a two-dimensional matrix.
(6) Standardise the two-dimensional demand relevance data so that all values lie in the range of 0 to 1. The standardisation divides the difference between each pairwise demand relevance before standardisation and the minimum demand relevance before standardisation by the difference between the maximum and minimum demand relevance before standardisation, yielding the standardised demand relevance of each pair of items.

By following the above steps, we can calculate the demand relevance of the items. The relevance is relative between pairs of items (the relevance of one item against another equals that of the second against the first); hence, we only need to count each pair once in the computation of total relevance. When applying the algorithm, items classified in the same cluster are sorted by item number in ascending order, and the demand relevance of each smaller-numbered item against every larger-numbered item is summed, so that the total demand relevance of each cluster is obtained. Finally, the total demand relevance of all clusters is summed to obtain the total demand relevance of the clustering result. This total demand relevance is the fitness value of the particle, and the objective of the target equation is to maximise the total relevance of each cluster.
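A minimal sketch of this computation is given below, assuming the standardisation in Step (6) is the usual min-max normalisation and that the relevance matrix is symmetric; all names (standardised_relevance, total_cluster_relevance, rel) are illustrative.

import numpy as np

def standardised_relevance(raw):
    """Min-max standardise a symmetric demand-relevance matrix to the range [0, 1]."""
    raw = np.asarray(raw, dtype=float)
    r_min, r_max = raw.min(), raw.max()
    return (raw - r_min) / (r_max - r_min) if r_max > r_min else np.zeros_like(raw)

def total_cluster_relevance(rel, clusters):
    """Sum the pairwise relevance of items inside each cluster (each pair counted once),
       then sum over clusters; this is the fitness value to be maximised."""
    total = 0.0
    for items in clusters:
        items = sorted(items)                      # sort by item number, ascending
        for a_idx, a in enumerate(items):
            for b in items[a_idx + 1:]:            # smaller-numbered item against larger-numbered items
                total += rel[a, b]
    return total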

3.1.4. Inventory Turnover Rate

The inventory turnover rate refers to the number of times inventory items turn over in a certain period. In general, enterprises use the inventory turnover rate as an indicator of inventory management [9, 14, 26, 34], as shown in (9), whose inputs are the demand for item i, the inventory of item i, and the volume of inventory items.

Average inventory volume of cluster j = .

Inventory volume within the cycle =

Equation (9) is the computation equation of the inventory turnover rate target, and the objective is to maximise the inventory turnover rate of the clustering results.
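As the exact form of (9) is not reproduced above, the one-line sketch below assumes the common definition of a cluster's turnover rate as the demand over the period divided by the average inventory volume; the names are illustrative.

def turnover_rate(cluster_demand, cluster_avg_inventory):
    """Inventory turnover rate of a cluster: total demand in the period divided by
       the average inventory volume; the objective is to maximise this value."""
    return cluster_demand / cluster_avg_inventory if cluster_avg_inventory > 0 else 0.0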

3.2. Descriptions of Standardisation of Multiobjective Target Equation

In this section, the single-objective target equations are combined using the objective planning method to obtain the optimal clustering results that can satisfy the various target equations. Since the units of the objectives vary, the objectives are standardised using a simple standardisation method that converts the target value of each objective into a value in the range of 0 to 1. The standardisation method assesses the effect measure of the clustered target values against the unclustered target values. The clustered target value is the target value calculated by inputting the clustered items into each single-objective target equation. The unclustered target value is the target value calculated by inputting all the items, treated as a single cluster, into the target equation.

The dummy standard sequence consists of the optimal assessment result of each attribute. This optimal result is determined by whether the objective of the attribute is maximisation or minimisation. The effect measure represents the relationship between each element of the sequence corresponding to an attribute and the dummy standard sequence. The calculation of the effect measure distinguishes between the upper and lower effect measures, according to whether the attribute is to be maximised or minimised, as illustrated below [11].

The upper effect measure is applicable to attributes with the objective of maximisation (i.e., the larger the better), such as the inventory turnover rate. Hence, the maximum result of all programs under the attribute is used as the element corresponding to the dummy standard sequence. The upper effect measure is defined as follows:

The lower effect measure is applicable to attributes with the objective of minimisation (i.e., the smaller the better), such as total cost. Hence, the minimum result of all the programs under the attribute is used as the element corresponding to the dummy standard sequence. The lower effect measure is defined as follows:

As defined above, the value of the effect measure is in the range between 0 and 1; the greater the value, the better the effect of the program under the attribute.
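The exact definitions of the two effect measures are not reproduced above; a minimal sketch under one common form (a ratio against the best result over all programs, giving values in the range from 0 to 1) is shown below, and should be read as an illustration rather than the paper's own equations.

def upper_effect_measure(values):
    """Larger-is-better attribute (e.g., inventory turnover rate): each program's value
       is measured against the maximum over all programs."""
    best = max(values)
    return [v / best if best != 0 else 0.0 for v in values]

def lower_effect_measure(values):
    """Smaller-is-better attribute (e.g., total cost): each program's value is measured
       against the minimum over all programs."""
    best = min(values)
    return [best / v if v != 0 else 0.0 for v in values]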

3.3. Pareto Set Method to Solve the Nonlinear Problem of Target Equation

Regarding the nonlinear problem in Section 3.1.2, this paper adopts the Pareto set method (Lin et al., 2011) [28], which can be divided into two parts. The first part constructs an initial approximation of the Pareto set, that is, a piecewise-linear solution approximating the nonlinear problem, which is then used for Pareto prediction in the second part. The approximation method in the first part adopts an appropriate number of line segments to approximate the original nonlinear Pareto set until the approximation error is within the acceptable range. Then, the feature points of the approximation are used for the linearisation of the nonlinear constraints in the second part to generate a group of appropriate linear constraints. The details of the method are described below [12, 33, 36].

The nonlinear Pareto set is indifferent to reliability and is regarded as the start of the initial Pareto set. If the jth target function is used as the independent variable, the value of the other target function is the dependent variable. First, we calculate the optimal solution of the jth target function and use it as the lower bound of the independent variable, as shown in (12).

Next, after computing the optimal solution of the other target function, we input this optimal solution into the jth target function to obtain the upper bound of the independent variable. Therefore, after computing the lower and upper bounds of the independent variable, we can obtain the Pareto set function, as shown below. With the upper and lower bounds as the starting points, proceed to the next step:

After the computation of the lower and upper bound functions, as shown in (12) and (13), we calculate the error between the lower and upper bounds and proceed to the next step.

The block sandwich squeezing method uses linear upper and lower bound functions to squeeze the original function until the error between the bounds is smaller than the expected tolerance. If the Pareto set is regarded as a convex function, in other words, if the constrained objective is used as the independent variable and the value of the other target function at the nondominated solution as the dependent variable, we can produce a group of linear upper and lower bound functions. The constraint method can only be used within the appropriate range. If there are cut-off points within this range, the Pareto set can be divided into blocks for discussion; the upper bound function of each block can then be written as:

The upper bound function is moved downward until it intersects the nonlinear function at a tangent point. Finding this point is a single-objective optimisation problem, as shown below:

If this tangent point is the optimal point producing the optimal solution of (15), we can present the lower bound function as follows:

As the range of each lower bound function can be obtained by considering the following and previous segments, the intersection points of adjacent lower bound functions can be computed to obtain the upper and lower limits of each segment. Given that the cut-off points are appropriate, we can approximate the original function by the piecewise-linear lower and upper bound functions, as follows:

Check whether the final error obtained in the previous step is smaller than the error acceptable to the decision maker. If it is smaller, the first part is complete and the method enters the second part; if not, proceed to the next step.

Using the tangent points obtained in the calculation of the lower bound functions as additional cut-off points, increase the number of cut-off points of the initial Pareto set in order to use more line segments for the approximation. According to the modified sandwich squeezing method proposed by Tan et al. (2006) [36], more cut-off points are placed where the curvature of the Pareto set is greater. Then, return to the squeezing step.

After finding the appropriate linear model of the Pareto set in the objective space, a group of corresponding constraint conditions in the design space is determined. In other words, each cut-off point is used as a "feature point" to determine the corresponding solution in the design space, and a first-order Taylor linearisation of the active constraints is conducted. Once the group of constraint conditions is obtained, proceed to the next step.

The initial reliability, as determined by the decision maker, is used to horizontally move the constraint conditions; then, proceed to the next step.

The Pareto set method for linear problems is used to calculate the Pareto set from the initial reliability to the final reliability; then, proceed to the next step.

The horizontal movement of the linear Pareto set and that of the nonlinear Pareto set differ by an error caused by the degree of nonlinearity. This error is checked by the distance between the optimal points of the same single objective in the objective space of the two problems; after the computation, proceed to the next step.

Check whether the error calculated in the previous step is acceptable to the decision maker; if it is acceptable, the implementation of the method ends; otherwise, proceed to the next step.

In theory, using an infinite number of line segments to approximate the nonlinear function can capture the characteristics of the function. Hence, the error acceptable to the decision maker is halved, and the method restarts from the first part.
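The squeezing idea of the first part can be illustrated on a generic convex trade-off curve: on each block the chord joining the block endpoints acts as an upper bound, the point of largest gap between the chord and the curve is found by a one-dimensional optimisation, and that point becomes a new cut-off until the gap is within tolerance. The sketch below (using scipy for the one-dimensional search) illustrates only this mechanism; the function g and all names are assumptions, not the authors' exact formulation.

from scipy.optimize import minimize_scalar

def sandwich_cutoffs(g, a, b, tol, max_iter=50):
    """Approximate a convex curve g on [a, b] by chords (upper bounds), adding a cut-off
       point where the chord-curve gap is largest, until every gap is at most tol."""
    cuts = [a, b]
    for _ in range(max_iter):
        worst_gap, worst_point = 0.0, None
        for lo, hi in zip(cuts[:-1], cuts[1:]):
            slope = (g(hi) - g(lo)) / (hi - lo)
            gap = lambda x, lo=lo, slope=slope: (g(lo) + slope * (x - lo)) - g(x)   # chord minus curve
            res = minimize_scalar(lambda x: -gap(x), bounds=(lo, hi), method="bounded")
            if -res.fun > worst_gap:
                worst_gap, worst_point = -res.fun, res.x
        if worst_gap <= tol:
            break
        cuts = sorted(cuts + [worst_point])          # refine where the curvature matters most
    return cuts

# Example with an illustrative convex curve: cuts = sandwich_cutoffs(lambda x: 1.0 / x, 0.2, 2.0, tol=0.05)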

3.4. DPSO (Dynamic Particle Group Optimisation)

The DPSO algorithm procedure is as follows [8, 20, 25]:
(1) Establish and initialise a number of subgroups, up to the maximum number of groups.
(2) In the early rounds of implementation (the number of rounds is determined by the number of particle groups and dimensions), after calculating the fitness values of the particles in the subgroups, disturb the particles when updating the particle positions.
(3) In the subsequent implementations, after calculating the particle fitness values of the various subgroups, use the modified velocity equation to update the position of each particle by referring to the positions of the particles of the other subgroups.
(4) Consider the changed fitness values of the particles relative to the optimal solution and implement the tournament selection.
(5) After the convergence of all particles in a certain subgroup, use the fuzzy c-means method to adjust the cluster centres.

The algorithm procedure is as shown in Figure 1.

The entire particle group set is divided into a number of subgroups according to the number of groups. For the given number of groups, we use the disturbance mechanism within the subgroups to test different combinations of group centres, thereby increasing particle diversity. The second step compares the fitness values generated by the different groups and uses the modified velocity equation to update the particle movement. The third part uses fuzzy theory to consider the overall distribution of the data and obtain the optimal centres when the particles converge.
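The skeleton below restates the procedure in code form; every helper passed in (evaluate_fitness, disturb, tournament_select_gbest, update_velocity_position, update_archive, converged, fuzzy_c_means_adjust) is a placeholder for the operations of Sections 3.4.1-3.4.2 and Figure 1, so this is only an organisational sketch, not the authors' implementation.

def dpso(subgroups, evaluate_fitness, disturb, tournament_select_gbest,
         update_velocity_position, update_archive, converged,
         fuzzy_c_means_adjust, n_disturb_iterations, n_iterations):
    """subgroups: list of particle lists; the remaining arguments are the operators
       described in the text; returns the external register of nondominated solutions."""
    archive = []
    for t in range(n_iterations):
        for group in subgroups:
            for particle in group:
                particle.fitness = evaluate_fitness(particle)           # multiobjective fitness
                if t < n_disturb_iterations:
                    disturb(particle)                                    # Section 3.4.1
                gbest = tournament_select_gbest(particle, archive)       # Section 3.4.2
                update_velocity_position(particle, gbest, subgroups)     # modified velocity equation
            update_archive(archive, group)                               # keep the nondominated solutions
            if converged(group):
                fuzzy_c_means_adjust(group)                              # adjust the cluster centres
    return archive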

3.4.1. Disturbance Mechanism

The PSO evolution indicates that particles move to a known location according to previous experience, which results in fast convergence. Hence, after several iterations, many particles tend to be attracted to clusters in certain areas, resulting in stagnation in late iterations. Although the particles have very good search capabilities in some local areas, they cannot find a better solution outside these areas. In other words, these particles cannot escape the local optimal solution and are unable to determine the global optimal solution. This paper adds a disturbance mechanism to the moving process, following Tsai et al. (2010) [38], in order to increase the opportunity for the particles to escape the local optimal solution. Disturbance is a mechanism similar to genetic mutation for improving poorer dimensional values. The disturbance mechanism is designed as a random disturbance within the boundaries, so that it does not cause the problem of escaping the spatial boundary. The operational process of the disturbance mechanism, for a given number of particle groups and problem dimension, is elaborated as follows.

N is defined as the product of the DR (disturbance rate) and the total number of particle dimensions. We randomly select N dimensions of particles to which noise is added for disturbance; each selected dimension index lies between 1 and the problem dimension. The disturbance noise is between 0 and the gap between the boundary and the current dimension value. In late iterations, most particles start to concentrate around the optimal positions of some regions. To prevent a purely local search by the particles and ensure an efficient search for the optimal solution, the disturbance amount is increased. Hence, to satisfy the above-mentioned requirements, the disturbance probability increases linearly with the iteration number; the DR, as set in this paper, increases linearly from 0.06 to 0.28. The disturbance mechanism causes the particles to move towards unexplored space in the selected dimension, rather than rely on previous experience, keeps the particles from falling into a local optimal solution, and encourages particles to move in directions that have not been explored. The disturbance operational pseudocode is shown below. The random parameter r is Gaussian noise (mean value 0.5, standard deviation 0.9); if r is at most 0.5, a low-value disturbance is applied; otherwise, a high-value disturbance is applied. After determining the direction, the disturbance value is determined by multiplying a random value rand by the gap between the dimension value and the corresponding search bound of the dimension.

If r ≤ 0.5, apply a low-value disturbance; Else, apply a high-value disturbance; End.
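A minimal runnable sketch of this pseudocode is given below, with x the particle position vector and lb/ub the per-dimension search bounds; the schedule function follows the stated DR range of 0.06 to 0.28, while the remaining names are illustrative.

import random

def disturbance_rate(iteration, max_iterations, dr_min=0.06, dr_max=0.28):
    """DR increases linearly with the iteration number, as set in this paper."""
    return dr_min + (dr_max - dr_min) * iteration / max_iterations

def disturb(x, lb, ub, dr):
    """Randomly disturb about dr * len(x) dimensions of the particle position x,
       keeping every value inside the search bounds [lb[d], ub[d]]."""
    n = max(1, round(dr * len(x)))
    for d in random.sample(range(len(x)), n):
        r = random.gauss(0.5, 0.9)                 # Gaussian noise, mean 0.5, standard deviation 0.9
        if r <= 0.5:                               # low-value disturbance: move towards the lower bound
            x[d] -= random.random() * (x[d] - lb[d])
        else:                                      # high-value disturbance: move towards the upper bound
            x[d] += random.random() * (ub[d] - x[d])
    return x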

3.4.2. Tournament Selection

The global optimal solution of the PSO algorithm has the greatest impact on the movement direction of the entire cluster. As mentioned above, the main difference between single-objective optimisation and multiobjective optimisation lies in the form of the solution. In single-objective optimisation, there is only one solution, that is, the global optimal solution; in multiobjective optimisation, however, the objectives trade off against each other and there are generally more than two optimal solutions. Thus, the selection of the global optimal solution seriously affects the convergence of the algorithm and the dispersion of the solutions. Hence, the global optimal solution of a multiobjective optimisation problem is redefined. In single-objective optimisation, the global optimal solution is the only optimal solution selected in the evolutionary process; however, this definition is not applicable to a multiobjective problem. How to select the global optimal solution from the nondominated solution set to guide the subsequent evolution is therefore an important issue.

By referring to the binary tournament selection (M. Chakraborty and U. K. Chakraborty, 1997) [2], this study proposes the tournament selection of the global optimal solution. Literally, it selects the winner by tournament, as a competitor that wins over all other rivals. The winning competitor is the result determined using the mechanism. In multiobjective optimisation, the global optimal solution assigned to each particle may vary, meaning there are different leaders. The mechanism expects, in the evolution process, to effectively distribute the global optimal solution and move forward in the direction of a less dense solution space, in order to find more nondominated solutions that enhance solution diversity without delaying the convergence velocity of the algorithm.

The tournament selection mechanism is shown in Figure 2. Each diamond represents a particle; a dotted gray diamond represents an elite particle of the external register; a solid white diamond represents a particle in evolution; and the marked distances are those between the evolving particle and the particles in the external register. The process of defining the global optimal solution of a particle in evolution is described in the following three steps:
(1) Compute the Euclidean distance between the particle in evolution and each particle in the external register.
(2) For each particle in evolution, from all the distances between the particle and the particles in the external register, randomly select three distances and hence three candidate particles; these three distances form the competition among three competitors.
(3) Finally, compare the three distances; the smallest distance corresponds to a particle of the external register, and that particle is assigned as the leading direction of evolution for the evolving particle.

The distance is computed as follows, where one index denotes a particle in the external register and the other denotes the particle in evolution:

As shown in Figure 2, in the case of Particle W in evolution, if we randomly select three distances (D1, D2, and D3) from all the distances between Particle W and the particles of the external register, D2 is apparently shorter than the remaining two distances. Therefore, the corresponding nondominated solution is assigned as the global optimal solution of Particle W. As the three distances are randomly selected, each particle in the external register has the opportunity to be assigned as the global optimal solution of a particle in evolution, thus enhancing solution diversity and reducing the local concentration that would arise if a few leading particles in the external register guided too many particles in evolution. In this way, the number of particles trapped at local optimal positions can be limited, and the loss of particle diversity is reduced.
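A minimal sketch of this three-candidate tournament is shown below, representing the evolving particle and the external-register members by their position vectors; the function and variable names are illustrative.

import math
import random

def tournament_select_gbest(particle_pos, archive_positions, n_candidates=3):
    """Randomly pick n_candidates members of the external register and return the one
       closest (Euclidean distance) to the evolving particle as its global best."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    candidates = random.sample(archive_positions, min(n_candidates, len(archive_positions)))
    return min(candidates, key=lambda member: dist(member, particle_pos))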

3.5. Multiobjective Solution’s Evaluation Method-MS

The difference between the single-objective optimisation problem and the multiobjective optimisation problem is that the optimal solution to the former is a single point in the solution space; therefore, comparing a single value in the solution space is enough to distinguish the quality of the solutions. In the latter case, however, as the optimal solutions do not dominate one another, they form a linear or curved distribution of multiple solution sets in the objective space. Hence, algorithm efficiency should be evaluated using multiobjective assessment. The measurement and evaluation standards used in this paper are elaborated as follows.

The difference performance indicator cannot be used to discuss the similarity of the boundary or of the Pareto front, but it can comprehensively evaluate the similarity between the uniformity of the obtained set and the optimal Pareto front. Hence, MS is used to evaluate the similarity of the boundary and the Pareto front obtained by the algorithm [4, 5]. The MS evaluation cases are as follows.

3.5.1. MS = 1

In (19), as MS is the square root of a sum of squares, it is always positive. Among the MS evaluation parameters, one set denotes the optimal (Pareto) solutions and the other denotes the identified solutions; max and min denote the maximum and minimum values of these sets in each dimension, and the number of dimensions is the number of targets to be optimised. Moreover, when the optimal solutions and the found solutions coincide at the boundary, the corresponding sums are the same. In other words, when MS is 1, the extensibility of the found solutions at the boundary is perfect.

3.5.2. MS < 1

In general, MS is almost always below 1. When MS is below 1, the solution distribution in the evaluated dimension covers the extent of the Pareto front in that dimension.

3.5.3. MS > 1

As the MS assessment parameter is the square root of the squared extensibility assessment parameters over the various dimensions, MS can exceed 1 in particular situations, suggesting that the solution distribution has not covered the extent of the Pareto front in some dimension. In other words, the difference between the distribution of the searched solutions and the Pareto front is large.
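Because the exact form of (19) is not reproduced above, the sketch below uses a commonly cited maximum-spread formulation that compares, objective by objective, the extent of the found front with the extent of a reference Pareto front; an MS of 1 then corresponds to the perfect-extensibility case of Section 3.5.1. Treat this as an assumed stand-in rather than the paper's own equation.

import numpy as np

def maximum_spread(found_front, reference_front):
    """found_front, reference_front: arrays of shape (n_solutions, n_objectives).
       Compares the per-objective extent of the found front with the reference extent."""
    F = np.asarray(found_front, dtype=float)
    R = np.asarray(reference_front, dtype=float)
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    r_min, r_max = R.min(axis=0), R.max(axis=0)
    overlap = (np.minimum(f_max, r_max) - np.maximum(f_min, r_min)) / (r_max - r_min)
    return float(np.sqrt(np.mean(overlap ** 2)))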

4. Case Verification

Using DPSO, this study develops a clustering method for inventory items and compares it with other common clustering methods in order to verify the effectiveness of the proposed method. The other methods include the multiobjective PSO clustering algorithm, the ABC analysis method, and the groupings in which each item is an independent group or all the items are classified as one group (namely, no clustering). The cost target equation calculates the total sum of ordering costs and inventory holding costs; a smaller cost means better clustering results. In Table 1, "DPSO" is the application of DPSO, "PSO" is the multiobjective PSO clustering algorithm, "ABC" is the ABC analysis method, and "G" is the clustering method of independent groups.

4.1. Comparison of Multiobjective Target Value and Single-Objective Target Value

In the case study, this paper refers to nearly three years of monthly inventory data, covering 6938 items and 156 suppliers. The data are consistent with the classification criteria of the ABC analysis method. Table 1 lists the target values obtained using various clustering methods in the case of a different single-objective target equation. As shown in Table 1, the proposed DPSO algorithm is superior to other clustering methods.

If the store uses the clustering method, the annual cost of inventory items is 2,635,849.67 NTD. If the store uses the proposed DPSO method, it can save 326,580.55 NTD annually, with a saving of 10.9%. The comparison with other clustering methods is as shown in Table 2. The computation of the percentage in Table 2 is as shown below:

The results of applying the proposed DPSO method in actual inventory data are as shown in Tables 3 and 4. Although the clustering results using the proposed DPSO method are better than the results of the single-objective methods, the ranking of the multiobjective method is almost second in terms of individual objective comparison. The ranking of the backorder rate is the highest. The percentage difference computation of Tables 3 and 4 is as shown below:

4.2. Comparison of Different Multiobjective Algorithms

Deb et al. (2002) [14] proposed the nondominated sorting genetic algorithm II (NSGA-II), which incorporates elitism and an explicit diversity-maintenance mechanism. An offspring generation of size N is produced and combined with the parent generation of size N. In the ranking of the nondominated solutions, individuals of better quality are retained to preserve the optimal nondominated solutions of the parent and offspring generations. In addition, crowded tournament selection is applied to maintain the diversity between solutions. However, if the number of nondominated solutions found in the first search is fewer than the overall population, then besides the solutions close to the Pareto front, other solutions that only approximate the Pareto front will also be retained, which affects the convergence of the algorithm. This paper compares DPSO, PSO, and NSGA-II to confirm that using a disturbance mechanism and tournament selection to select the global optimal solution can achieve a better convergence effect.

Each test function experiment is performed 30 times. The experimental test function is the multiobjective function introduced in Section 3.1, the fitness evaluation assessment scale (FEAS) is 20,000, the initial population size (number of particles or chromosomes) is 120, and the number of new particles (chromosomes) generated in each iteration (movement) is 80. The simulation results are obtained after repeating the Monte Carlo experiment 30 times for each algorithm. The parameter setups of the Pareto archived evolution strategy (PAES) and the nondominated sorting harmony search (NSHS) are shown in Table 6. Tables 5 and 6 show the settings of the various algorithm parameters.

Moreover, in order to evaluate and compare the localisation accuracy of the four approaches, we use the normalised localisation error (NLE) proposed by Manjarres et al. (2013) [30]. Table 7 lists, for each algorithm and scenario, the mean, minimum, and standard deviation of the NLE values associated with the topologies belonging to the approximated Pareto front after 30 Monte Carlo experiments [14, 39].

This paper adopts the solution spread measure (SSM), ratio of nondominated individuals (RNI), and optimizer overhead (OO) to compare the multiobjective solutions, as shown in Table 8 [5, 35].

4.2.1. SSM

As it is desirable to find more Pareto-optimal solutions, it is also desirable to find solutions scattered uniformly over the Pareto frontier in order to provide a variety of compromise solutions to the decision makers. SSM denotes the distribution of the solutions along the Pareto front, as shown below, where N is the number of solutions along the Pareto front (so there are N − 1 consecutive distances), di is the distance (in objective space) between consecutive solutions, the arithmetic mean of all di is taken, and df and dl are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set. Thus, a low value of this performance measure characterises an algorithm with good distribution capacity.
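The equation itself is not reproduced above; the sketch below assumes the widely used spread formulation consistent with this description, namely (df + dl + the sum of |di − mean(di)|) divided by (df + dl + (N − 1) · mean(di)), with solutions sorted along the first objective. The names are illustrative.

import numpy as np

def solution_spread(front, extreme_low, extreme_high):
    """front: array (N, m) of nondominated objective vectors; extreme_low/extreme_high:
       boundary solutions of the reference front. Lower values mean a more uniform spread."""
    front = np.asarray(front, dtype=float)
    front = front[np.argsort(front[:, 0])]                     # sort along the first objective
    d = np.linalg.norm(np.diff(front, axis=0), axis=1)         # consecutive distances d_i
    d_mean = d.mean()
    d_f = np.linalg.norm(front[0] - np.asarray(extreme_low, dtype=float))
    d_l = np.linalg.norm(front[-1] - np.asarray(extreme_high, dtype=float))
    return (d_f + d_l + np.abs(d - d_mean).sum()) / (d_f + d_l + (len(front) - 1) * d_mean)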

4.2.2. RNI

This performance metric is defined as the ratio of nondominated individuals (RNI) for a given population X, as shown below, namely RNI = nondom_indiv / P, where nondom_indiv is the number of nondominated individuals in population X and P is the size of population X. A value of 1 means that all the individuals in the population are nondominated, and 0 denotes the situation where none of the individuals in the population are nondominated. Since any nonempty population contains at least one nondominated individual, RNI lies in the range from greater than 0 up to 1.
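A short sketch is given below, assuming all objectives are minimised and using the standard Pareto dominance check; the names are illustrative.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rni(population):
    """Ratio of nondominated individuals: nondom_indiv / P for population X."""
    nondom = [p for p in population
              if not any(dominates(q, p) for q in population if q is not p)]
    return len(nondom) / len(population)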

4.2.3. OO

The total number of evaluations and the total CPU time may be used to test the algorithm. This is useful for indicating how much time an optimisation or simulated evolution process would take in the real world and the amount of program overhead resulting from the optimisation manipulations, such as those of the evolutionary algorithm operators, as shown in (24), where one quantity is the total time taken and the other is the time taken for pure function evaluations. Thus, an OO value of zero indicates that an algorithm is efficient and does not have any overhead; however, this is an ideal case and is not practically reachable.
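The exact form of (24) is not reproduced above; the one-line sketch below assumes the overhead is the extra time beyond pure function evaluations, normalised by the pure evaluation time, which matches the interpretation that a value of zero means no overhead. This assumed form may differ from the paper's equation.

def optimizer_overhead(total_time, evaluation_time):
    """Assumed form: OO = (total run time - pure evaluation time) / pure evaluation time."""
    return (total_time - evaluation_time) / evaluation_time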

The Kruskal-Wallis test (KW test) is a rank-based analysis-of-variance method that uses ranks to test whether the above independent population distributions are the same. The statistic is as shown below [32]:

In (25), the quantities are the total number of samples, the number of sample groups, the number of samples in the ith group, and the sum of the ranks of the samples in the ith group. When the null hypothesis is true, the statistic K approximately follows a chi-square distribution whose degrees of freedom equal the number of groups minus one. If K exceeds the corresponding critical value, the null hypothesis is rejected; otherwise, it is accepted. In this paper, the computed K exceeds the critical value; therefore, the four algorithms have significantly different distributions.
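The KW statistic and its p-value can be computed directly with scipy, as in the sketch below; the sample arrays (one per algorithm, e.g., 30 NLE or MS values each) and the significance level of 0.05 are illustrative assumptions.

from scipy.stats import kruskal

def kw_test(*samples, alpha=0.05):
    """Kruskal-Wallis test across independent samples (one array per algorithm).
       Returns the statistic, the p-value, and whether the distributions differ at level alpha."""
    statistic, p_value = kruskal(*samples)
    return statistic, p_value, p_value < alpha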

Regarding the mutually nondominated solutions stored in the external registers by DPSO and PSO, whether each newly found solution can be included in the external register is determined by the dominance relationship between that solution and the nondominated solutions already in the external register. If the external register size is not limited, there are four possibilities [12], as described below:
(1) In the first generation of evolution, there is no particle in the external register; thus, the newly found solution is directly entered into the external register.
(2) If the newly found solution is dominated by a nondominated solution of the external register, the new solution is deleted and the external register remains the same.
(3) If the newly found solution and the nondominated solutions of the external register are mutually nondominated, the new solution is included in the external register.
(4) If the newly found solution dominates solutions in the external register, the dominated solutions in the external register are deleted and the new solution is included in the external register.

The above four situations describe the pairwise comparison of newly found solutions with the solutions in the external register. As the iterations continue, there are increasing numbers of solutions in the external register, and there is usually more than one newly found solution; therefore, in the case of a many-to-many particle relationship, the solutions are compared in pairs and the external register is updated accordingly, as sketched below.
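A compact sketch of these update rules is given below, storing objective vectors in an unbounded external register and assuming minimisation of all objectives; the dominates helper repeats the standard Pareto check used earlier, and all names are illustrative.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_register(register, new_solution):
    """Rules (1)-(4): discard a dominated newcomer; otherwise remove any register members
       it dominates and insert it (the register is assumed to be unbounded)."""
    if any(dominates(old, new_solution) for old in register):
        return register                                             # rule (2): register unchanged
    register[:] = [old for old in register if not dominates(new_solution, old)]   # rule (4)
    register.append(new_solution)                                    # rules (1) and (3)
    return register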

The MS convergence curves suggest that the convergence of the three algorithms is similar; however, the convergence of the proposed algorithm is superior to that of the remaining two algorithms, as shown in Figure 3.

Regarding the comparison of the disturbance mechanism, as described in Section 3.4.1, when the DR mechanism’s N is 1.7, MS can have better convergence effects, as shown in Figure 4.

For the comparison of the number of competing items in tournament selection, each experiment is performed 30 times. Figure 5 shows the comparison of the competition items of tournament selection for 2 to 6 candidates. As seen, a better convergence effect is achieved when the number of tournament candidates is 3.

5. Conclusions

This study established a customer demand-oriented DPSO, classified products using different classification methods, and evaluated the different inventory target performances. Enterprises with products of small quantity and great variety, such as stationery discount stores, can make appropriate inventory decisions in response to different customer demands using the proposed method. According to the research findings, although the clustering results of the proposed DPSO algorithm are superior to the results of the single-objective methods, the ranking of the multiobjective method on each individual objective is about second, as shown in Table 4. Figures 3–5 compare DPSO, PSO, and NSGA-II and show that selecting the global optimal solution using a disturbance mechanism and tournament selection (DPSO) achieves a better convergence effect. In the future, the proposed inventory decision processes may be applied to inventory classification decision-making processes for similar types of business.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.