Mathematical Problems in Engineering

Volume 2014, Article ID 450367, 15 pages

http://dx.doi.org/10.1155/2014/450367

## A New Approach to Reducing Search Space and Increasing Efficiency in Simulation Optimization Problems via the Fuzzy-DEA-BCC

^{1}Federal University of Itajubá (UNIFEI), Avenida BPS 1303, Caixa Postal 50, 37500-903 Itajubá, MG, Brazil

^{2}Sao Paulo State University (UNESP), Avenida Dr. Ariberto Pereira da Cunha 333, 12516-410 Guaratinguetá, SP, Brazil

Received 30 January 2014; Revised 14 April 2014; Accepted 17 April 2014; Published 19 May 2014

Academic Editor: Massimo Scalia

Copyright © 2014 Rafael de Carvalho Miranda et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The development of discrete-event simulation software was one of the most successful interfaces between operational research and computation. As a result, research has focused on the development of new methods and algorithms with the purpose of increasing simulation optimization efficiency and reliability. This study aims to define optimum variation intervals for each decision variable through a proposed approach which combines data envelopment analysis with Fuzzy logic (Fuzzy-DEA-BCC), seeking to improve the distinction among decision-making units in the face of uncertainty. In this study, Taguchi's orthogonal arrays were used to generate the necessary quantity of DMUs, and the output variables were generated by simulation. Two study objects were utilized as examples of mono- and multiobjective problems. Results confirmed the reliability and applicability of the proposed method, as it enabled a significant reduction in search space and computational demand when compared to conventional simulation optimization techniques.

#### 1. Introduction

The development of discrete-event simulation (DES) software was one of the greatest successes in bringing the realms of operational research (OR) and computation together, according to Fu [1]. Hillier and Lieberman [2] highlight that simulation is an extremely versatile technique which enables the investigation of practically any type of stochastic system. This versatility has turned simulation into the most commonly used OR technique for stochastic systems.

The integration of simulation and optimization grew stronger from the 1990s onwards due to the development of commercial software packages which marketed integrated optimization routines, thus making it considerably easier to carry out a decision-making analysis [2–4].

Thus, Azadeh et al. [5] affirm that simulation optimization is one of the most important OR tools that has come about in recent years. Previous methodologies demanded complex alterations which were frequently economically and temporally unviable, especially for problems with a large number of decision variables.

According to Medaglia et al. [6], simulation optimization aims to find the best values for simulation model input parameters in search of one or more desired outputs. This is generally a slow process that demands a large amount of time and numerous experiments.

In spite of advances in simulation optimization software, a common criticism is that, when dealing with more than one input variable, the optimization process becomes very slow [7–9]. Furthermore, according to Hillier and Lieberman [2], computational simulation packages may be considered relatively slow and costly when applied to studies of stochastic and dynamic systems. In such systems, where random behavior prevails, there is a tendency towards elevated expenses and the allocation of qualified labor and expertise, along with extra time for analysis and programming. All these factors lead to a considerably higher computational demand.

Following this thought, Kleijnen et al. [10] recognize that simulation optimization problems are generally difficult to solve and present disadvantages such as the fact that the model outputs are implicit functions and exposed to noise. These authors highlight the difficulties involved in analyzing outputs of stochastic simulation models due to the variation existing in each replication.

Taking all of this into account, this study proposes a procedure for simulation model optimization which reduces the search space and increases efficiency, measured as reduced computational demand, by evaluating scenario efficiency in the face of uncertainty.

In order to meet this paper's objectives, the procedure herein proposed makes use of the Taguchi orthogonal arrays [11] to represent the experimental region and to test each scenario and its replications using DES and Fuzzy logic combined with DEA. The method is called Fuzzy-DEA-BCC [12, 13] and incorporates the concept of superefficiency. A further description is offered later in the paper.

The use of the Taguchi orthogonal arrays [11] as a means of representing the experimental region of a simulation optimization problem is justified by the reduced number of experiments required by such experimental designs, given that they are saturated fractional factorial arrays, thus allowing the analysis of multiple factors and levels while testing all levels of each factor in a balanced manner [14].

The arrays for this method were chosen as a function of the number of inputs and outputs, following the rule which determines that, in order to use classic DEA (CCR and BCC) models, the minimum number of decision-making units (DMUs) must be equal to or greater than three times the sum of the total number of input and output variables [12, 15, 16].

According to Cook and Seiford [17], DEA provides a method which identifies the DMUs that serve as benchmarks for the others under analysis, forming an efficiency frontier. In the specific case of DES, Weng et al. [18] assert that DEA allows the evaluation of the relative efficiency of a group of entities using multiple inputs and outputs without requiring knowledge of the relationships among them.

For this study, Fuzzy-DEA-BCC models were chosen due to the stochastic and nonlinear nature of DES and the fact that the set of DMUs may generally present different characteristics (number of personnel, machines, throughput, profit, work in progress, lead time, etc.) and tend to have different yields or outputs on different scales. The latter justifies the use of the DEA-BCC model [18].

Finally, in order to rank the scenarios generated by the experimental matrix and reduce search space, the concept of superefficiency was used, as proposed by Andersen and Petersen [19]. Xue and Harker [20] point out that superefficiency is capable of differentiating DMUs, thus permitting a classification in terms of efficiency.

In order to reach the proposed objectives, this paper is divided into 5 sections. Section 2 presents the theoretical base, focusing on the DEA, and Section 3 describes the optimization procedure, integrating the previously presented tools. Section 4 details the application of the new optimization procedure and discusses the results. Finally, Section 5 presents the study conclusions.

#### 2. Data Envelopment Analysis

##### 2.1. Deterministic Data Envelopment Analysis

Classic DEA models were introduced by Charnes et al. [15]; they assume constant returns to scale and were named CCR models in homage to their creators [12]. These models were then extended by Banker et al. [12] to variable returns to scale, this time dubbed BCC, again in reference to their originators.

According to Cook and Seiford [17], DEA is a nonparametric methodology which comparatively measures the efficiency of each decision-making unit (DMU). It is also worth highlighting that DEA avoids the problems created by incommensurability (different units of measurement) among the elements of the input and output matrices.

In the original model from Charnes et al. [15], the input and output variable weights may be obtained from the solution of the fractional programming model, given by

$$\max \; h_0 = \frac{\sum_{r=1}^{s} u_r y_{r0}}{\sum_{i=1}^{m} v_i x_{i0}} \quad (1)$$

subject to:

$$\frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij}} \le 1, \quad j = 1, \ldots, n; \qquad u_r \ge 0, \; v_i \ge 0. \quad (2)$$

With DMU_{0} being the unit under evaluation among DMU_{1} to DMU_{n}: $h_0$ is the relative efficiency of DMU_{0}; $x_{i0}$ and $y_{r0}$ are the input and output data for DMU_{0}; $j$ is the DMU index, $j = 1, \ldots, n$; $r$ is the output index, $r = 1, \ldots, s$; $i$ is the input index, $i = 1, \ldots, m$; $y_{rj}$ is the value of the $r$th output for the $j$th DMU; $x_{ij}$ is the value of the $i$th input for the $j$th DMU; $u_r$ is the weight associated with the $r$th output; $v_i$ is the weight associated with the $i$th input.

It is observed that if $h_0 = 1$, DMU_{0} is efficient when compared to the other units considered in the model, and, if $h_0 < 1$, this DMU is deemed inefficient.

This model (1)-(2) is not linear and thus admits multiple solutions; nonetheless, it can be linearized, generating the DEA-CCR model, given by the following equation:

$$\max \; h_0 = \sum_{r=1}^{s} u_r y_{r0} \quad \text{s.t.} \quad \sum_{i=1}^{m} v_i x_{i0} = 1; \quad \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0, \; j = 1, \ldots, n; \quad u_r, v_i \ge 0. \quad (3)$$

DEA-BCC models, with variable returns to scale, can be expressed by the following equations:

$$\max \; h_0 = \sum_{r=1}^{s} u_r y_{r0} + u_0 \quad (4)$$

subject to:

$$\sum_{i=1}^{m} v_i x_{i0} = 1, \quad (5)$$

$$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} + u_0 \le 0, \quad j = 1, \ldots, n, \quad (6)$$

$$u_r \ge 0, \quad (7) \qquad v_i \ge 0, \quad (8) \qquad u_0 \text{ free.} \quad (9)$$

A main difference between DEA-BCC and DEA-CCR models is the addition of the free variable $u_0$, which indicates variable returns to scale. Banker et al. [12] affirm that a DMU considered efficient in a BCC model will also be considered efficient in the CCR model; the inverse, however, is not necessarily true.

According to Cooper et al. [16], in order to provide a suitable discrimination of the DMUs in traditional DEA models, the following must be verified: number of DMUs ≥ max{(number of inputs × number of outputs), 3 × (number of inputs + number of outputs)}.
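
This rule can be sketched in a few lines (a minimal illustration; the function name is ours):

```python
# Sketch of the DMU-count rule from Cooper et al. [16]:
# number of DMUs >= max(inputs * outputs, 3 * (inputs + outputs)).
def min_dmus(n_inputs: int, n_outputs: int) -> int:
    """Minimum number of DMUs for adequate discrimination."""
    return max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))

# With 6 inputs and 1 output (as in the monoobjective case of Section 4.1),
# at least 21 DMUs are required, which justifies an L25 orthogonal array.
print(min_dmus(6, 1))  # → 21
```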

Classic DEA models consider DMUs with $h = 1$ as efficient and DMUs with $h < 1$ as inefficient. It is quite possible that multiple DMUs are efficient, in which case the discrimination among DMUs is poor. To deal with this limitation, Andersen and Petersen [19] proposed the concept of superefficiency to help differentiate DMUs which present $h = 1$.

For the superefficiency evaluation to be employed in DEA-BCC models, the instance of constraint (6) referring to the DMU under analysis must be removed, so that this DMU can attain scores greater than 1. DMUs considered inefficient in the traditional evaluation remain inefficient, but those whose scores were equal to one may now show scores above 1, thus enabling the elaboration of a ranking.
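
To make the idea concrete, here is a minimal sketch of superefficiency for the simplest single-input, single-output CCR case, where each DMU is scored against the best of the *other* DMUs so that efficient units can exceed 1 (the paper's models are the full multiplier LPs; the function and data below are illustrative only):

```python
# Superefficiency in the single-input, single-output CCR case:
# score each DMU's output/input ratio against the best ratio among the others.
def super_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    scores = []
    for j, r in enumerate(ratios):
        best_other = max(r2 for k, r2 in enumerate(ratios) if k != j)
        scores.append(r / best_other)
    return scores

# Illustrative data: the first DMU (ratio 0.5) beats the best of the
# others (0.45), so its superefficiency score exceeds 1 (≈ 1.11),
# which allows it to be ranked above the remaining units.
scores = super_efficiency([10, 20, 30, 50], [5, 9, 9, 15])
```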

##### 2.2. Fuzzy-DEA

Hatami-Marbini et al. [21] undertook an extensive literature review of Fuzzy Theory combined with DEA models. The authors' motivation was that, in general, estimating input and output DMU values in real problems is difficult, which can produce efficiency values with low reliability; one approach to dealing with this uncertainty is the adoption of Fuzzy Theory concepts.

Kao and Liu [13] assert that measuring a DMU's efficiency is a difficult task which involves complex economic variables such as interest rates, tax rates, employment levels, and demand. According to these authors, efficiency measurement becomes even more difficult when analyzing multiple inputs and outputs.

In this context, Wen et al. [22] comment that DMUs can be classified into two categories: efficient and inefficient. However, incorporating uncertainty, such as measurement error in inputs and outputs, can make the calculation of efficiency more reliable and robust.

Fuzzy Theory has come to be used with the objective of modeling uncertainty in DEA models [23]. Fuzzy-DEA models are based on Fuzzy Linear Programming, with DEA-BCC models with Fuzzy coefficients, proposed by Kao and Liu [13], having special emphasis in this paper (Fuzzy-DEA-BCC). Consider the following:

$$\max \; \tilde{E}_0 = \sum_{r=1}^{s} u_r \tilde{y}_{r0} + u_0 \quad (10)$$

subject to:

$$\sum_{i=1}^{m} v_i \tilde{x}_{i0} = 1, \quad (11)$$

$$\sum_{r=1}^{s} u_r \tilde{y}_{rj} - \sum_{i=1}^{m} v_i \tilde{x}_{ij} + u_0 \le 0, \quad j = 1, \ldots, n, \quad (12)$$

$$u_r, v_i \ge 0, \quad u_0 \text{ free.} \quad (13)$$

The value of the objective function (10) may be greater than 1 due to constraints (11)-(12), which involve Fuzzy parameters [23]. With the incorporation of Fuzzy coefficients, DEA-BCC models cannot be solved using traditional linear programming (LP) techniques. Hatami-Marbini et al. [21] list and describe the following main approaches which deal with Fuzzy-DEA:
(i) the α-level based approach;
(ii) the tolerance approach;
(iii) the Fuzzy ranking approach;
(iv) the possibility approach.

In this study, the approach based on the α-level was adopted and is described below. The α-level application is the most common for Fuzzy-DEA models, according to Hatami-Marbini et al. [21]. The idea of this method is to convert the Fuzzy-DEA model into a pair of parametric programming problems in order to find the upper and lower bounds of the membership functions of the DMU efficiency scores [23].

For the purposes of this research, triangular membership functions were utilized. According to Liang and Wang [24], such functions well represent human expertise in adequately judging the behavior of common variables in a range of practical situations. Along these same lines of thought, Aouni et al. [25] show multiple applications for Fuzzy triangular numbers which validate and justify the adoption of such a method in conjunction with goal programming (GP) models. Another justification arises from the fact that it is a linear function, thus easing the optimization process through traditional LP means [21].

To illustrate the process of Fuzzy-DEA modeling, the example in Kao and Liu's [13] paper was used, which deals with a model with variable returns to scale. Figure 1 shows the DEA-CCR and DEA-BCC models with four DMUs, denominated A, B, C, and D, with a single input and a single output for each DMU and with the output of DMU B being Fuzzy.

The input values for DMUs A, B, C, and D are 10, 20, 30, and 50, respectively. With these inputs, outputs 5, 9, and 15 from DMUs A, C, and D are also produced, respectively. These inputs and outputs are associated with points A = (10; 5), C = (30; 9), and D = (50; 15) in Figure 1. For DMU B, the output associated with input value 20 is represented by the trapezoidal Fuzzy number (5; 6; 8; 9), which is also illustrated in Figure 1 by points (20; 5), (20; 6), (20; 7.5), (20; 8), and (20; 9).

Based on Figure 1 and according to the DEA-BCC model [13], when the output of DMU B is less than or equal to 7.5, the production boundary is defined by the line segments connecting point (10; 0) to A = (10; 5) and from A to D = (50; 15). The efficiency scores of DMUs A, C, and D are 1, 0.9, and 1, respectively; the efficiency of DMU B lies between 5/7.5 = 0.67 and 7.5/7.5 = 1, depending on its output.
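
These frontier scores can be reproduced with a small linear program. Below is a minimal sketch (not the authors' GAMS implementation) of the output-oriented BCC multiplier model using `scipy.optimize.linprog`, with DMU B's Fuzzy output fixed at its modal value 7.5:

```python
# Output-oriented DEA-BCC (multiplier form) for the Kao and Liu [13] example.
# Decision variables: u (output weight), v (input weight), v0 (free variable
# for variable returns to scale). Efficiency = 1 / optimal objective value.
from scipy.optimize import linprog

x = [10, 20, 30, 50]   # inputs of DMUs A, B, C, D
y = [5, 7.5, 9, 15]    # outputs, with B's Fuzzy output at its modal value

def bcc_output_efficiency(j0):
    # minimize v*x[j0] + v0  subject to  u*y[j0] = 1  and, for every DMU j,
    # u*y[j] - v*x[j] - v0 <= 0 (frontier constraints), u, v >= 0, v0 free.
    c = [0.0, x[j0], 1.0]                       # objective over (u, v, v0)
    A_ub = [[y[j], -x[j], -1.0] for j in range(len(x))]
    b_ub = [0.0] * len(x)
    A_eq = [[y[j0], 0.0, 0.0]]
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None), (None, None)])
    return 1.0 / res.fun

# DMU C (index 2) scores 9/10 = 0.9, matching the frontier A-D at input 30.
print(round(bcc_output_efficiency(2), 4))
```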

If the output associated with DMU B increased from 7.5 to 9, the production boundary would be represented by the line segments linking point (10; 0) to A, from A to B = (20; 9), and from B to D = (50; 15). In this case, DMUs A, B, and D lie on the efficient output boundary, while the efficiency of C will be, according to output orientation, between the values of 9/10 = 0.9 and 9/11 = 0.82.

In this example, regardless of the output value of DMU B, DMUs A and D are always efficient. In other words, with the combination (generation of scenarios) of each Fuzzy output for DMU B, the effects of uncertainty on DMUs A and D are always the same and present 100% efficiency.

Conducting an analysis under the assumption of constant returns to scale (DEA-CCR model), as seen in Figure 1, the production boundary will be the solid line which links the origin point (0; 0) with point A = (10; 5). In this case, only DMU A will always be efficient (with an efficiency score of 100%), regardless of the output value of DMU B. However, the efficiency of DMU B will vary from 1/2 = 0.5 to 9/10 = 0.9, and DMUs C and D will have an efficiency of 3/5 = 0.6; that is, there are no effects of the Fuzzy output of DMU B on the efficiencies of DMUs C and D.
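
For the single-input, single-output CCR case, these scores can be verified directly as ratios to the best output/input ratio (a minimal sketch with the data of Figure 1, omitting the Fuzzy DMU B):

```python
# Under constant returns to scale with one input and one output, the CCR
# score reduces to each DMU's output/input ratio divided by the best ratio.
def ccr_scores(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# DMUs A, C, D from Figure 1: A defines the frontier through the origin
# (ratio 0.5), while C and D both score 3/5 = 0.6, as stated in the text.
print(ccr_scores([10, 30, 50], [5, 9, 15]))
```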

As a further example, the membership function linked to the output of DMU B may be written explicitly, given that it is a trapezoidal function, shown by the following equation:

$$\mu_{\tilde{B}}(y) = \begin{cases} y - 5, & 5 \le y \le 6, \\ 1, & 6 \le y \le 8, \\ 9 - y, & 8 \le y \le 9, \\ 0, & \text{otherwise.} \end{cases}$$

The α-level (or α-cut) of a Fuzzy number is defined as the interval within which the membership value is at least α. The upper and lower bounds of this interval capture the variation (uncertainty) at each α-level and are associated with scenario generation, ranging between a pessimistic evaluation (lower bound) and an optimistic one (upper bound) in the efficiency analysis.
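
For a trapezoidal number, the α-cut has a simple closed form; a minimal sketch using DMU B's output (5; 6; 8; 9) (the helper name is ours):

```python
# α-cut of the trapezoidal Fuzzy number (a, b, c, d): the interval where the
# membership value is at least α, i.e. [a + α(b - a), d - α(d - c)].
def alpha_cut(a, b, c, d, alpha):
    return (a + alpha * (b - a), d - alpha * (d - c))

# For DMU B's output (5; 6; 8; 9): α = 0 gives the full support [5, 9]
# (most uncertainty), while α = 1 gives the core [6, 8].
print(alpha_cut(5, 6, 8, 9, 0.0))   # → (5.0, 9.0)
print(alpha_cut(5, 6, 8, 9, 1.0))   # → (6.0, 8.0)
```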

As previously stated, the α-level based approach was adopted, which, according to Hatami-Marbini et al. [21], is the most popular for Fuzzy-DEA models, with many referenced publications [13]. The value of α allows the generation of scenarios, that is, different efficiency values respecting the variation range determined by the membership function. In such models, $\tilde{x}_{ij}$ and $\tilde{y}_{rj}$ are, respectively, the Fuzzy parameters for the $i$th input and the $r$th output of the $j$th DMU. These values are approximately known and can be represented by Fuzzy sets through the membership functions $\mu_{\tilde{x}_{ij}}$ and $\mu_{\tilde{y}_{rj}}$. Thus, Fuzzy-DEA models can be formed using the α-cut bounds of these parameters for a given scenario. With the α-level generating a set of scenarios, the bounds, as defined by Kao and Liu [13], are as follows:

$$(x_{ij})_\alpha^L = \min\{x : \mu_{\tilde{x}_{ij}}(x) \ge \alpha\}, \qquad (x_{ij})_\alpha^U = \max\{x : \mu_{\tilde{x}_{ij}}(x) \ge \alpha\},$$

$$(y_{rj})_\alpha^L = \min\{y : \mu_{\tilde{y}_{rj}}(y) \ge \alpha\}, \qquad (y_{rj})_\alpha^U = \max\{y : \mu_{\tilde{y}_{rj}}(y) \ge \alpha\}. \quad (14)$$

Fuzzy-DEA can thus be transformed into a family of DEA models with different levels of uncertainty, parameterized by the intervals $[(x_{ij})_\alpha^L, (x_{ij})_\alpha^U]$ and $[(y_{rj})_\alpha^L, (y_{rj})_\alpha^U]$. The results for each scenario identify the uncertainty variation range in the model's input and output data [13, 26–28], with each α-level defining the interval bounds given in (14), according to Kao and Liu [13].

Kao and Liu [13], based on Yager [29], Zadeh [28], and Zimmermann [30], established that a membership function defining the efficiency of each DMU can be constructed from the efficiency scores obtained by (4)–(9). The approach for constructing the membership function proposed in this study adopts, for each α-level, the efficiency value of the corresponding scenario obtained by (10)–(14). For further details, Kao and Liu [13], Hatami-Marbini et al. [21], and Kao and Lin [31] are recommended reading.

Based on the models developed by Banker et al. [12], in accordance with (4)–(9), and Kao and Liu [13], in accordance with (10)–(14), the Fuzzy-DEA-BCC model was developed. The indices, parameters, decision variables, objective functions, and model constraints are proposed below, considering DMU_{0} as the unit under analysis. In all interval definitions, the mean of the triangular membership function is taken as the most probable value, that is, the value without uncertainty.

(i) Indices:
(a) $j = 1, \ldots, n$ is the DMU index.
(b) $r = 1, \ldots, s$ is the output index.
(c) $i = 1, \ldots, m$ is the input index.

(ii) Parameters:
(a) $(y_{r0})^U$ and $(y_{r0})^L$ are, respectively, the upper and lower bounds of the triangular membership function interval for the $r$th Fuzzy output of DMU_{0}.
(b) $(x_{i0})^U$ and $(x_{i0})^L$ are, respectively, the upper and lower bounds of the triangular membership function interval for the $i$th Fuzzy input of DMU_{0}.
(c) $(y_{rj})^L$ is the lower bound of the triangular membership function interval of the $r$th Fuzzy output for the $j$th DMU.
(d) $(y_{rj})^U$ is the upper bound of the triangular membership function interval of the $r$th Fuzzy output for the $j$th DMU.
(e) $(x_{ij})^L$ is the lower bound of the triangular membership function interval of the $i$th Fuzzy input for the $j$th DMU.
(f) $(x_{ij})^U$ is the upper bound of the triangular membership function interval of the $i$th Fuzzy input for the $j$th DMU.
(g) $\alpha$ is the value chosen for the α-level based approach, with $0 \le \alpha \le 1$.
(h) $(x_{i0})_\alpha$ is the coefficient in constraints linked to the $i$th Fuzzy input for DMU_{0}.
(i) $(y_{r0})_\alpha$ is the coefficient in constraints linked to the $r$th Fuzzy output for DMU_{0}.
(j) $(y_{rj})_\alpha$ is the coefficient in constraints linked to the $r$th Fuzzy output of the $j$th DMU.
(k) $(x_{ij})_\alpha$ is the coefficient in constraints linked to the $i$th Fuzzy input of the $j$th DMU.

(iii) Decision variables:
(a) $u_r$ is the weight associated with the $r$th output.
(b) $v_i$ is the weight associated with the $i$th input.

The Fuzzy-DEA-BCC model for an efficiency analysis of the DMUs in a pessimistic scenario (lower-bound efficiency) is as follows:

$$\begin{aligned} E_0^L = \max \quad & \sum_{r=1}^{s} u_r (y_{r0})_\alpha^L + u_0 \\ \text{s.t.} \quad & \sum_{i=1}^{m} v_i (x_{i0})_\alpha^U = 1, \\ & \sum_{r=1}^{s} u_r (y_{r0})_\alpha^L - \sum_{i=1}^{m} v_i (x_{i0})_\alpha^U + u_0 \le 0, \\ & \sum_{r=1}^{s} u_r (y_{rj})_\alpha^U - \sum_{i=1}^{m} v_i (x_{ij})_\alpha^L + u_0 \le 0, \quad j \ne 0, \\ & u_r, v_i \ge 0, \quad u_0 \text{ free.} \end{aligned}$$

The Fuzzy-DEA-BCC model for an efficiency analysis of the DMUs in an optimistic scenario (upper-bound efficiency) is as follows:

$$\begin{aligned} E_0^U = \max \quad & \sum_{r=1}^{s} u_r (y_{r0})_\alpha^U + u_0 \\ \text{s.t.} \quad & \sum_{i=1}^{m} v_i (x_{i0})_\alpha^L = 1, \\ & \sum_{r=1}^{s} u_r (y_{r0})_\alpha^U - \sum_{i=1}^{m} v_i (x_{i0})_\alpha^L + u_0 \le 0, \\ & \sum_{r=1}^{s} u_r (y_{rj})_\alpha^L - \sum_{i=1}^{m} v_i (x_{ij})_\alpha^U + u_0 \le 0, \quad j \ne 0, \\ & u_r, v_i \ge 0, \quad u_0 \text{ free.} \end{aligned}$$

Figure 2 geometrically illustrates the position of the DEA model parameters in the triangular membership function. The lower bounds $(x_{ij})^L$ and $(y_{rj})^L$ correspond to the lower variation limit associated with the inputs and outputs of the DMUs, while the upper bounds $(x_{ij})^U$ and $(y_{rj})^U$ correspond to the upper variation limit. The mean value of the triangular membership function is associated with the values of these parameters in a scenario without uncertainty.

#### 3. Problem Description and Modeling

##### 3.1. Proposal of Integrating Fuzzy-DEA with Simulation Optimization Problems

This paper’s proposal of integrating Fuzzy-DEA-BCC models in simulation optimization problems is based on the following four techniques:
(i) discrete-event simulation, to represent the real system to be optimized and to conduct scenario simulation and data collection;
(ii) the Taguchi orthogonal arrays, to generate experimental matrices and define the simulation runs to be executed in order to represent the search space;
(iii) Fuzzy-DEA-BCC, to analyze the efficiency of each generated scenario, taking into account the uncertainty present in each of them and ranking the most efficient DMUs;
(iv) an optimization procedure via simulation which can perform searches for optimal solutions.

The use of optimization assumes that the simulation model is constructed, verified, and validated, thus ensuring that the model adequately represents the reality of the phenomenon under study. It is also suggested that the response variables be discrete or integer-valued. The steps of the procedure are presented in Figure 3.

The application phases for the proposed procedure are described below.

*Step 1*. Define the simulation model decision variables and the variation range for each variable (lower level ≤ variable ≤ upper level).

*Step 2*. Determine the output variables (one or more) to be optimized.

*Step 3*. Select the Taguchi orthogonal arrays as a function of the number of decision variables and their variation limits. This selection must obey the fundamental rule established for the minimum number of DMUs to be analyzed through Fuzzy-DEA-BCC [16]. After the array selection, generate an experimental matrix which represents as diversified a solution region as possible, exploring all levels of each decision variable where possible.

*Step 4*. Execute the experiments in a discrete-event simulator and store the maximum, minimum, and mean values for each output variable to be optimized for analysis.

*Step 5*. Fuzzy analysis is as follows. Based on the experiments carried out in the previous step, insert the minimum, maximum, and mean values of each experiment into the triangular membership function. The choice of this membership function is based on comments by Liang and Wang [24], who justify the use of Fuzzy triangular functions given that they suitably mirror human judgments. Thus, according to these authors, the advantage of using triangular Fuzzy numbers lies in the fact that not only can human expertise be suitably represented, but the model additionally accounts for uncertainty in the involved data and parameters. In optimization models dealing with uncertainty, present and future information cannot be perfectly known and should therefore be treated as uncertain [32]. This study utilized the α-level based approach given that, according to Wang and Liang [32], it is the most popular Fuzzy-DEA approach, as evidenced by the great number of publications using Fuzzy-DEA in the current scientific literature [21]. For more information on these approaches, please see Hatami-Marbini et al. [21].
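
Step 5 can be sketched as follows (the replication values are hypothetical and the helper name is ours):

```python
# Step 5 sketch: build a triangular membership function from the minimum,
# mean, and maximum observed across replications, and take its α-cut.
def triangular_cut(lo, mode, hi, alpha):
    """α-cut [lower, upper] of the triangular number (lo, mode, hi)."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

# Hypothetical replication data for one scenario's throughput:
reps = [96, 104, 99, 101, 100]
lo, mode, hi = min(reps), sum(reps) / len(reps), max(reps)
print(triangular_cut(lo, mode, hi, 0.5))  # → (98.0, 102.0)
```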

*Step 6*. Determine the efficiency of each scenario by means of Fuzzy-DEA-BCC for the simulated results, as follows. Upon definition of the maximum, minimum, and mean values linked to each membership function associated with each analyzed condition, the Fuzzy-DEA-BCC model is applied using the α-level based approach, varying the value of α from 0 to 1 and thus generating 11 scenarios with different superefficiency values for both the pessimistic and the optimistic cases.

*Step 7*. Rank the most efficient DMUs based on the concept of superefficiency, as follows. Given that there are 11 pessimistic scenarios and 11 optimistic scenarios, it is necessary to aggregate them into a global scenario. The adopted method is to take the geometric mean of each DMU's superefficiency scores across all scenarios, arriving at a global score, and to rank these scores from greatest to lowest. The first and second positions in the ranking are then used to reduce the range of each decision variable and, in turn, carry out the simulation optimization.
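
Step 7's aggregation can be sketched as follows (the scores below are hypothetical, not the values of the paper's tables):

```python
# Step 7 sketch: aggregate each DMU's scenario scores (in the paper, 11
# pessimistic + 11 optimistic) into one global score via the geometric
# mean, then rank from greatest to lowest.
import math

def geometric_mean(scores):
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical superefficiency scores for three DMUs over four scenarios:
dmu_scores = {"DMU 1": [1.20, 1.15, 1.25, 1.18],
              "DMU 2": [0.95, 1.02, 0.98, 1.00],
              "DMU 3": [1.10, 1.05, 1.12, 1.08]}
ranking = sorted(dmu_scores, key=lambda d: geometric_mean(dmu_scores[d]),
                 reverse=True)
print(ranking)  # → ['DMU 1', 'DMU 3', 'DMU 2']
```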

*Step 8*. Reduce the variation range of each decision variable, as follows. The two top-ranked DMUs define the new, narrower range of each variable; any variable that takes the same value in both DMUs is fixed at that value.

*Step 9*. Optimize the simulation model with a new variation range for each decision variable.

*Step 10*. Analyze the results of the optimization and make decisions based on the results found.

In order to exemplify the application of the proposed procedure, real situations involving the optimization of two simulation models are presented in the following sections. The models used had been previously verified and validated, thus proving apt for simulation optimization.

#### 4. Application and Data Analysis

##### 4.1. Monoobjective Case

The modeled situation corresponds to a quality control cell in a fiber-optic transponder company. The cell is responsible for a series of tests which determine the approval or rejection of the equipment produced at the plant. All verification and validation phases were appropriately undertaken, thus providing a consistent model.

For this study object, the decision variables were defined as the number of operators responsible for carrying out quality control tests and the number of pieces of equipment for test types 1, 2, and 3. All variables were defined as integers, with a lower bound equal to one and an upper bound equal to five. Table 1 presents this information.

The optimization objective was to find the combination of decision variables which would maximize the total number of inspected products in the quality control center. For the problem in question, considering the number of decision variables and their variation range (1–5), there are a total of 15,625 (5^{6}) possible scenarios in the search space.

Considering the quantity of decision variables, the variation of levels of each variable, and the rule for the minimum number of DMUs proposed by Cooper et al. [16], the orthogonal array L25 was chosen. Seeing that there are six input variables and one output variable, there would be a minimum of 21 DMUs (runs) by following the classic rule, justifying the use of array L25. With a defined array, the experimental matrix was generated and presented in Table 2.

Next, the 25 scenarios from array L25 were simulated, each with 30 replications representing one month of quality control operation. The simulations were carried out on a computer with an Intel Core 2 Duo processor at 1.58 GHz, 2 GB of RAM, and a 64-bit Microsoft operating system.

Data for each output variable were stored for the superefficiency calculations. Nearly 26 minutes were needed to process the 25 scenarios, considering all replications. The maximum, minimum, and average values over the 30 replications of each scenario were stored for use in the triangular membership function. Results for the output variable are shown in Table 2.

For the calculation of the superefficiency of each DMU with the Fuzzy-DEA-BCC model, the General Algebraic Modeling System (GAMS) [33], version 22.8.1, was used with the solver CPLEX, version 11.0, adapted for this specific calculation.

With these results, the superefficiency value of each DMU can be computed for the pessimistic and optimistic cases by varying α, summing up to 11 scenarios each. These values are presented in Tables 3 and 4.

From the superefficiency values of each scenario, the geometric mean was computed for both the pessimistic and optimistic cases. The results are presented in Table 5. With these data, it was possible to calculate the average score of each DMU and then rank the DMUs as a function of their superefficiency values (see Table 5).

Through a superefficiency analysis, it was possible to rank the DMUs in order of efficiency. For the problem in question, DMU 1 is the most efficient, followed by DMU 6. Both are highlighted in Table 5.

With the identification of the two most efficient DMUs and based on the experimental matrix in Table 2, a new interval can be identified for each decision variable in which a better set of solutions is expected. The new intervals are presented in Table 6. One variable stands out: it took the same value in both DMUs and was therefore already defined, reducing the number of decision variables to five.

With the reduction of the variation interval for each decision variable, the search space for the best solution was reduced from 15,625 to 240—a reduction of 98.4%.
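
The arithmetic behind these figures can be checked directly:

```python
# Search-space figures quoted in the text: six variables with five levels
# each give 5**6 = 15,625 scenarios; after fixing one variable and
# narrowing the others, 240 scenarios remain.
original = 5 ** 6
reduced = 240
reduction = 1 - reduced / original
print(f"{reduction:.2%}")  # → 98.46%, consistent with the ~98.4% reported
```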

To confirm the efficiency of the search space reduction, the optimizer SimRunner [34] was utilized for the simulation model optimization. SimRunner is a popular optimization software [35], sold in conjunction with the ProModel simulator. According to Kim et al. [36] and Banks et al. [4], this optimization software searches for an optimal solution using the genetic algorithm metaheuristic which, as pointed out by Ólafsson [37], mimics the process of natural selection.

The optimizer was set to the same conditions and objectives in two optimizations of the same problem: one using the reduced variation intervals (Table 6) and the other using the original problem conditions (Table 1). The results are shown in Table 7.

The responses presented by the optimizer were equal for only two of the decision variables; for the other variables, the optimizer arrived at different values. As for the objective value, with the reduced search space the optimizer reached a result statistically equal to that of the original problem.

For the problem with the reduced range, the optimizer carried out 88 experiments before converging, equal to 36.67% of the experimental area (240 scenarios), taking a little more than 2.25 hours. For the problem with the original range, the optimizer carried out 183 experiments, slightly more than 1.17% of the total experimental area (15,625 scenarios), taking 4.8 hours.

##### 4.2. Multiobjective Cases

The second study object represents a production cell in a Brazilian telecommunications company which produces fiber-optic equipment. This model, like the previous one, had already passed through the phases of verification and validation before being used in the simulation optimization. For this study object, the decision variables were defined as the number of pieces of inspection equipment (types 1 and 2) and the number of employees (types 1, 2, and 3) who carry out the activities represented in the model. The variables were defined as integers, with a lower bound of 1 and an upper bound of 5. Table 8 presents this information.

The optimization objective was to find the combination of decision variables which would maximize throughput and production cell profit. Considering the number of variables and their variation range (1–5), there are a total of 3,125 (5^{5}) possible scenarios in the search space for the best configuration.

To meet the fundamental rule for the number of DMUs, an L25 orthogonal array was used to generate the experimental matrix presented in Table 9. The scenarios in the experimental matrix were simulated in ProModel.
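The "fundamental rule" referenced here is commonly stated as requiring at least max{m × s, 3(m + s)} DMUs, where m is the number of inputs and s the number of outputs [16]. A minimal sketch, assuming the five decision variables act as DEA inputs and the two objectives as outputs:

```python
def min_dmus(m: int, s: int) -> int:
    """Cooper-Seiford-Tone rule of thumb: the number of DMUs should be
    at least max{m * s, 3 * (m + s)} for adequate DEA discrimination."""
    return max(m * s, 3 * (m + s))

# Five decision variables (inputs) and two objectives (outputs):
required = min_dmus(5, 2)   # max{10, 21} = 21
print(required)             # 21 -> an L25 array (25 DMUs) satisfies the rule
```

With 21 DMUs required, the 25 runs of an L25 orthogonal array are sufficient, which motivates the choice of array above.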

Thirty replications were simulated for each scenario, each replication corresponding to a month of operation in the production cell, and the data for each output variable were stored for the superefficiency calculation. Simulation of the 25 scenarios and their 30 replications took slightly more than 16.25 minutes. The results (minimum, maximum, and mean values) for the outputs of each DMU are shown in Table 9.

To calculate the superefficiency of each DMU with the Fuzzy-DEA-BCC model, the same procedure was employed. From the output variable results, a superefficiency value was obtained for each DMU under both the pessimistic and the optimistic scenarios. These two values were then combined into a geometric average, making it possible to rank the DMUs by superefficiency. The final result is presented in Table 10.
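The aggregation step can be sketched as follows; the scores below are hypothetical illustrations, not values from Table 10. Assuming each DMU carries one pessimistic and one optimistic superefficiency score, the combined value is their geometric mean:

```python
import math

def combined_superefficiency(pessimistic: float, optimistic: float) -> float:
    """Geometric mean of the superefficiency scores obtained under the
    pessimistic and optimistic scenarios."""
    return math.sqrt(pessimistic * optimistic)

# Hypothetical scores for three DMUs: (pessimistic, optimistic)
scores = {1: (0.82, 1.10), 2: (1.05, 1.31), 3: (0.95, 1.02)}

# Rank DMUs from most to least efficient by the combined score
ranking = sorted(scores, key=lambda d: combined_superefficiency(*scores[d]),
                 reverse=True)
print(ranking)  # [2, 3, 1]
```

The geometric mean penalizes a DMU that is efficient in only one of the two scenarios, which is why it is a natural choice for combining the pessimistic and optimistic views.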

Based on the superefficiency value rankings (Table 3) in this study, DMU 2 proved to be the most efficient, followed by DMU 17. Both are highlighted in Table 10.

Once these two DMUs were identified as the most efficient, a new interval for each decision variable could be defined, reducing the optimization search space. Table 9 shows that variable took the same value in both of the most efficient DMUs; its value was therefore fixed at , reducing the number of free variables from five to four. The new intervals for the remaining decision variables are presented in Table 11.

By reducing the variation interval for each decision variable, the search space for the optimal solution was reduced from 3,125 to 64 scenarios, a reduction of nearly 98%.

To test the efficiency and robustness of the new search space, SimRunner was set to carry out the simulation model optimization, aiming to maximize total cell production and total profit , for both the original variation ranges (Table 8) and the reduced variation ranges (Table 11). The results are shown in Table 12.

The responses presented by the optimizer were equal only for variable . For the other decision variables, the optimizer arrived at different values. In the case of the original variation ranges, the responses indicated the need to hire more employees and purchase more equipment, except for variable . As for the solutions of and , the results were statistically equal at a 95% confidence level.
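The claim of statistical equality at 95% confidence can be checked with a two-sample test on the replication outputs of the two optimization runs. The paper does not specify which test was used, so the Welch t-test below is an assumption, and the sample data are hypothetical:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples with possibly unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical throughput replications from the two optimization runs
reduced_run = [101, 99, 100, 102, 98, 100]
original_run = [100, 101, 99, 100, 102, 99]
t, df = welch_t(reduced_run, original_run)

# |t| below the ~2.0 critical value (95% confidence, moderate df)
# suggests the two means are statistically equal
print(abs(t) < 2.0)  # True
```

In practice `scipy.stats.ttest_ind(a, b, equal_var=False)` performs the same test and also returns the p-value directly.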

Continuing the analysis, for the reduced-range problem the optimizer took 38.25 minutes to execute 51 experiments before converging, approximately 80% of the reduced experimental area. For the original variation ranges, the optimizer took 2.1 hours to carry out 168 experiments before converging, about 5.3% of the total experimental area.

As can be seen in Figure 4, for both models tested with the proposed optimization procedure, the output variable responses were statistically identical, while the time needed to reach these results dropped considerably with the reduced ranges.

#### 5. Conclusions and Recommendations for Future Research

This paper has proposed a method for reducing the search space of simulation optimization problems, providing search space reductions of roughly 98% and significant reductions in convergence time without compromising response quality.

These results were achieved by combining fuzzy set theory with a DEA-BCC model, enabling scenario analysis under uncertainty, a common reality for discrete-event simulation models, which typically deal with stochastic, dynamic, and interrelated environments.

With an output variable analysis taking uncertainty into account, the use of Fuzzy-DEA-BCC allowed an efficiency analysis of pessimistic and optimistic scenarios. This, in turn, enabled the ranking of the most efficient DMUs.

Upon determining the two most efficient DMUs, a new range for each decision variable was established, permitting the optimization software to concentrate on the region of the greatest efficiency, according to the analysis conducted with Fuzzy-DEA-BCC.

As a means of validating this paper's proposal, a widely used commercial optimizer was used to check whether the method was indeed able to limit the search region to the area containing the best solutions. To do so, the optimizer ran the simulation model under both variation ranges: the original and the one reduced with Fuzzy-DEA-BCC. For both study objects, the reduced search space provided a response of quality equal to that of the original range, with a significant reduction in convergence time.

Finally, it is worth mentioning that the Taguchi arrays proved to be practical and reliable, as they represented the search space while exploring the maximum possible diversity of levels for each decision variable, something that would have been difficult with classical experimental designs limited to two or three levels.

The possibilities for future research include:

(i) utilizing GPDEA-BCC [38], which improves DEA discrimination even when the number of DMUs does not meet Cooper, Seiford, and Tone's rule [16];

(ii) conducting tests with continuous decision variables;

(iii) applying other arrays or experimental strategies that could replace the orthogonal arrays;

(iv) testing the method proposed in this paper with other optimizers, such as multiple comparison procedures, optimal computing budget allocation, and nested partitions, which seek to increase the efficiency of the optimization process.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to thank CNPq, CAPES, FAPEMIG, and the Doctoral Program in Production Engineering at Federal University of Itajubá (UNIFEI) for supporting this research.

#### References

1. M. C. Fu, “Optimization for simulation: theory vs. practice,” *INFORMS Journal on Computing*, vol. 14, no. 3, pp. 192–215, 2002.
2. F. S. Hillier and G. J. Lieberman, *Introduction to Operations Research*, McGraw-Hill, New York, NY, USA, 9th edition, 2010.
3. M. C. Fu, S. Andradottir, J. S. Carson et al., “Integrating optimization and simulation: research and practice,” in *Proceedings of the Winter Simulation Conference*, pp. 610–616, Orlando, Fla, USA, December 2000.
4. J. Banks, J. S. Carson II, B. L. Nelson, and D. M. Nicol, *Discrete Event Simulation*, Prentice-Hall, Upper Saddle River, NJ, USA, 4th edition, 2005.
5. A. Azadeh, M. Tabatabaee, and A. Maghsoudi, “Design of intelligent simulation software with capability of optimization,” *Australian Journal of Basic and Applied Sciences*, vol. 3, no. 4, pp. 4478–4483, 2009.
6. A. L. Medaglia, S.-C. Fang, and H. L. W. Nuttle, “Fuzzy controlled simulation optimization,” *Fuzzy Sets and Systems*, vol. 127, no. 1, pp. 65–84, 2002.
7. J. April, F. Glover, J. P. Kelly, and M. Laguna, “Practical introduction to simulation optimization,” in *Proceedings of the Winter Simulation Conference: Driving Innovation*, pp. 71–78, New Orleans, La, USA, December 2003.
8. J. Banks, “Panel session: the future of simulation,” in *Proceedings of the Winter Simulation Conference*, pp. 1453–1460, Arlington, Va, USA, December 2001.
9. C. R. Harrel, B. K. Ghosh, and R. Bowden, *Simulation Using ProModel*, McGraw-Hill, New York, NY, USA, 2004.
10. J. P. C. Kleijnen, W. van Beers, and I. van Nieuwenhuyse, “Constrained optimization in expensive simulation: novel approach,” *European Journal of Operational Research*, vol. 202, no. 1, pp. 164–174, 2010.
11. G. Taguchi, *System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs*, UNIPUB/Kraus International Publications, Dearborn, Mich, USA, 1987.
12. R. D. Banker, A. Charnes, and W. W. Cooper, “Some models for estimating technical and scale inefficiencies in data envelopment analysis,” *Management Science*, vol. 30, no. 9, pp. 1078–1092, 1984.
13. C. Kao and S.-T. Liu, “Fuzzy efficiency measures in data envelopment analysis,” *Fuzzy Sets and Systems*, vol. 113, no. 3, pp. 427–437, 2000.
14. P. J. Ross, *Taguchi Techniques for Quality Engineering*, McGraw-Hill, New York, NY, USA, 1996.
15. A. Charnes, W. W. Cooper, and E. Rhodes, “Measuring the efficiency of decision making units,” *European Journal of Operational Research*, vol. 2, no. 6, pp. 429–444, 1978.
16. W. W. Cooper, L. M. Seiford, and K. Tone, *Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software*, Springer Science + Business Media, New York, NY, USA, 2nd edition, 2007.
17. W. D. Cook and L. M. Seiford, “Data envelopment analysis (DEA)—thirty years on,” *European Journal of Operational Research*, vol. 192, no. 1, pp. 1–17, 2009.
18. S.-J. Weng, B.-S. Tsai, L.-M. Wang, C.-Y. Chang, and D. Gotcher, “Using simulation and data envelopment analysis in optimal healthcare efficiency allocations,” in *Proceedings of the Winter Simulation Conference (WSC '11)*, pp. 1295–1305, Phoenix, Ariz, USA, December 2011.
19. P. Andersen and N. C. Petersen, “A procedure for ranking efficient units in data envelopment analysis,” *Management Science*, vol. 39, pp. 1261–1264, 1993.
20. M. Xue and P. T. Harker, “Note: ranking DMUs with infeasible super-efficiency DEA models,” *Management Science*, vol. 48, no. 5, pp. 705–710, 2002.
21. A. Hatami-Marbini, A. Emrouznejad, and M. Tavana, “A taxonomy and review of the fuzzy data envelopment analysis literature: two decades in the making,” *European Journal of Operational Research*, vol. 214, no. 3, pp. 457–472, 2011.
22. M. Wen, Z. Qin, and R. Kang, “Sensitivity and stability analysis in fuzzy data envelopment analysis,” *Fuzzy Optimization and Decision Making*, vol. 10, no. 1, pp. 1–10, 2011.
23. S. Lertworasirikul, S.-C. Fang, J. A. Joines, and H. L. W. Nuttle, “Fuzzy data envelopment analysis (DEA): a possibility approach,” *Fuzzy Sets and Systems*, vol. 139, no. 2, pp. 379–394, 2003.
24. G.-S. Liang and M.-J. J. Wang, “Evaluating human reliability using fuzzy relation,” *Microelectronics Reliability*, vol. 33, no. 1, pp. 63–80, 1993.
25. B. Aouni, J. M. Martel, and A. Hassaine, “Fuzzy goal programming model: an overview of the current state-of-the art,” *Journal of Multi-Criteria Decision Analysis*, vol. 16, pp. 149–161, 2009.
26. A. Kaufmann, *Introduction to the Theory of Fuzzy Subsets*, Academic Press, New York, NY, USA, 1975.
27. L. A. Zadeh, “Outline of a new approach to the analysis of complex systems and decision processes,” *IEEE Transactions on Systems, Man and Cybernetics*, vol. 3, no. 1, pp. 28–44, 1973.
28. L. A. Zadeh, “Fuzzy sets as a basis for a theory of possibility,” *Fuzzy Sets and Systems*, vol. 1, no. 1, pp. 3–28, 1978.
29. R. R. Yager, “A characterization of the extension principle,” *Fuzzy Sets and Systems*, vol. 18, no. 3, pp. 205–217, 1986.
30. H.-J. Zimmermann, *Fuzzy Set Theory—and Its Applications*, Kluwer-Nijhoff Publishing, Boston, Mass, USA, 2nd edition, 1991.
31. C. Kao and P.-H. Lin, “Efficiency of parallel production systems with fuzzy data,” *Fuzzy Sets and Systems*, vol. 198, pp. 83–98, 2012.
32. R.-C. Wang and T.-F. Liang, “Application of fuzzy multi-objective linear programming to aggregate production planning,” *Computers and Industrial Engineering*, vol. 46, no. 1, pp. 17–41, 2004.
33. The General Algebraic Modeling System (GAMS), July 2013, http://www.gams.com/.
34. *SimRunner User Guide*, ProModel Corporation, Orem, Utah, USA, 2006.
35. S. S. Nageshwaraniyer, Y. J. Son, and S. Dessureault, “Simulation-based optimal planning for material handling networks in mining,” *Simulation: Transactions of the Society for Modeling and Simulation International*, vol. 89, pp. 330–345, 2013.
36. W. K. Kim, K. P. Yoon, Y. Kim, and G. J. Bronson, “Improving system performance for stochastic activity network: a simulation approach,” *Computers and Industrial Engineering*, vol. 62, no. 1, pp. 1–12, 2012.
37. S. Ólafsson, “Metaheuristics,” in *Handbooks in Operations Research and Management Science*, S. G. Henderson and B. L. Nelson, Eds., pp. 633–654, Elsevier, 2006.
38. H. Bal, H. H. Örkcü, and S. Çelebioğlu, “Improving the discrimination power and weights dispersion in the data envelopment analysis,” *Computers & Operations Research*, vol. 37, no. 1, pp. 99–107, 2010.