Abstract

Data Envelopment Analysis (DEA) is a nonparametric technique for estimating the current level of efficiency of a set of entities. DEA also provides information on how to remove inefficiency through the determination of benchmarking information. This paper is devoted to studying DEA models based on closest efficient targets, which are related to the shortest projection onto the production frontier and allow inefficient firms to find the easiest way to improve their performance. These models have usually been solved by unsatisfactory methods, since all of them are related in some sense to a combinatorial NP-hard problem. In this paper, the problem is approached by genetic algorithms and parallel programming. In addition, to produce reasonable solutions, a particular metaheuristic is proposed and checked on some numerical instances.

1. Introduction

In the literature, technologies have been estimated using many different approaches over the past 50 years [1]. The two principal methods are stochastic frontiers, which resort to econometric techniques, and Data Envelopment Analysis (DEA), which relies on mathematical programming models. In particular, DEA is a nonparametric technique based on mathematical programming for evaluating the technical efficiency of a set of decision making units (DMUs) that consume inputs to produce outputs [2]. In contrast to other efficiency methodologies, for example, stochastic frontiers, DEA simultaneously provides both an efficiency score and benchmarking information through efficient targets. These two pieces of information are usually inseparable in DEA. Indeed, the efficiency score is obtained from the distance between the assessed DMU and a point on the frontier of the technology, which serves as an efficient target for the assessed DMU. Information on targets can play a relevant role since targets indicate how inefficient units may improve their performance.

Traditional DEA measures of technical efficiency yield targets that are associated with the furthest efficient projection from the evaluated DMU [3]. However, several authors (see [3, 4] to name but a few) argue that the distance to the efficient projection point should be minimized, instead of maximized, in order for the resulting targets to be as similar as possible to the observed inputs and outputs of the evaluated DMU. The cornerstone of this approach is that closer targets suggest directions of improvement for inputs and outputs that lead to efficiency with less effort than other alternatives.

In general, closest targets are more easily attainable and provide the most relevant solution to remove inefficiency from the production process. The determination of closest targets has attracted an increasing interest of researchers in the recent DEA literature [4–7]. In particular, reviewing the literature we find that Frei and Harker [8] determined closest targets by minimizing the Euclidean distance to the efficient frontier. Gonzalez and Alvarez [9] proposed to minimize the sum of input contractions required to reach the boundary of the technology. Portela et al. [4] used the software QHull, which allows determining all facets of a polyhedron, with the aim of getting closest targets. More recently, Baek and Lee [10], Pastor and Aparicio [7], Amirteimoori and Kordrostami [5], and Aparicio and Pastor [6] focused their analyses on a weighted version of the Euclidean distance, while Jahanshahloo et al. [11] provided methods both for obtaining the minimum distance from DMUs to the frontier and for evaluating group performance of DMUs. On the other hand, Briec [12], Briec and Lesourd [13], and Briec and Lemaire [14] obtained the minimum distance to the frontier using Hölder norms, introducing in this way the Hölder distance functions in the literature related to efficiency measurement. Coelli [15] proposed an alternative to the second stage of the process for solving radial models based on closest targets. Cherchye and Van Puyenbroeck [16] maximized the cosine of the angle between the input vectors of the evaluated DMU and its corresponding projection on the boundary. Finally, other related papers are those by Lozano and Villa [17], who determined a sequence of targets to be achieved in successive leaps which finally converge to the efficient frontier, and by Charnes et al. [18] and Takeda and Nishino [19], who used techniques for assessing sensitivity in efficiency classification in DEA based on minimizing distances.

Regarding papers that have studied the computational aspects of DEA models associated with the determination of closest targets, we may cite several references: Aparicio et al. [3] and Jahanshahloo et al. [11, 20–22]. Overall, some of these approaches are based on Mixed Integer Linear Programming or Bilevel Linear Programming, while others are derived from algorithms that allow the determination of all the facets of a polyhedron. As we will argue in detail in Section 2, all these approaches present both strong points and weaknesses and, consequently, there is currently no approach accepted as the best solution to the problem.

In view of this discussion, determining closest targets has become one of the important issues in the DEA literature (see Cook and Seiford [23] for a recent survey on DEA). Nevertheless, from a computational point of view, the determination of closest targets is a hard problem, and this fact justifies the effort to apply new methods in order to overcome it.

In this paper, we use several genetic algorithms with the aim of determining closest targets in DEA. In particular, we resort to the approach by Aparicio et al. [3], which is based on Mixed Integer Linear Programming, and try to find feasible solutions of the model that these authors introduced by means of genetic algorithms. The difficulty of the problem requires us to build the genetic algorithm in stages, incorporating constraints as the results are analyzed.

The remainder of the paper is organized as follows. In Section 2, a brief introduction of the main notions associated with Data Envelopment Analysis is presented; additionally, the existing approaches for determining closest targets are outlined. In Section 3, the genetic algorithm is developed. After that, a random search is used to improve this algorithm, obtaining a hybrid metaheuristic; the idea is to use a random search method in some parts of the genetic algorithm to explore the space of feasible solutions more thoroughly. In Section 4, to reduce the execution time, a parallel algorithm in shared memory is developed and studied. In Section 5, the results of some experiments are summarized. Finally, Section 6 concludes the paper.

2. Data Envelopment Analysis and Closest Targets

Recent years have seen a great variety of applications of DEA for evaluating the efficiency of many different types of entities [2]. DEA models are based on mathematical programming. While other methods, such as stochastic frontiers, need the specification of a functional form for the production function (such as the Cobb-Douglas form), DEA is a nonparametric technique that estimates a piecewise-linear convex technology without this requirement, constructed so that no observation of the sample of data lies above it (refer to Figure 1). In Figure 1, we represent a sample of several firms (DMUs) that use one input ($x$) to produce one output ($y$). Each firm is represented by a point in the figure. We also show both the Cobb-Douglas function estimated from the data, if the analyst assumes such a functional form, and the piecewise-linear convex estimation of the frontier of the technology obtained by DEA techniques.

DEA involves the use of Linear Programming to construct the nonparametric piecewise surface over the data. Technical efficiency measures associated with the performance of each DMU are then calculated relative to this surface, as a distance to it. Although Farrell [24] was the first to introduce these ideas in the literature, the method did not receive wide attention until the paper by Charnes et al. [25], in which the term Data Envelopment Analysis was coined. Since then there have been a large number of papers which have extended, adapted, and applied this methodology in the fields of economics, operations research, and management science.

Before continuing, we need to define some notation. Assume that there are data on $m$ inputs and $s$ outputs for each of $n$ DMUs. For the $j$th DMU these are represented by $x_{ij}$, $i = 1, \dots, m$, and $y_{rj}$, $r = 1, \dots, s$, respectively.
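For the computational part of the paper, an instance of the problem is therefore just the pair of data matrices. A minimal C sketch of such an instance follows (the layout and identifiers are our assumptions, not the authors' code):

/* Sketch of a data layout for a DEA problem instance (illustrative;
   identifiers are ours, not taken from the authors' implementation). */
typedef struct {
    int n;        /* number of DMUs                         */
    int m;        /* number of inputs                       */
    int s;        /* number of outputs                      */
    double *X;    /* m x n input matrix,  X[i*n + j] = x_ij */
    double *Y;    /* s x n output matrix, Y[r*n + j] = y_rj */
} DEAProblem;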

There are many DEA technical efficiency measures in the literature. The basic DEA models are the CCR [25] and the BCC [26]. Both models are based on radial projections to the production frontier. However, many other approaches give freedom to the projection, so that the final efficient targets do not preserve the mix of inputs and outputs. In particular, the Enhanced Russell Graph measure [27], or slacks-based measure [28], may be calculated for DMU $k$, $k = 1, \dots, n$, as follows:

$$
\min \; \frac{1 - \frac{1}{m}\sum_{i=1}^{m} s_i^-/x_{ik}}{1 + \frac{1}{s}\sum_{r=1}^{s} s_r^+/y_{rk}}
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} = x_{ik} - s_i^-, \; i = 1, \dots, m; \quad
\sum_{j=1}^{n} \lambda_j y_{rj} = y_{rk} + s_r^+, \; r = 1, \dots, s; \quad
\lambda_j, s_i^-, s_r^+ \ge 0. \tag{1}
$$

Equation (1) is a linear fractional program that can be easily transformed into a linear program by a change of variables, as described in Pastor et al. [27]. Specifically, (1) is equivalent to the following linear program, in which $\beta$, $\lambda_j$, $s_i^-$, and $s_r^+$ denote the correspondingly rescaled variables:

$$
\min \; \beta - \frac{1}{m}\sum_{i=1}^{m} \frac{s_i^-}{x_{ik}}
\quad \text{s.t.} \quad
\beta + \frac{1}{s}\sum_{r=1}^{s} \frac{s_r^+}{y_{rk}} = 1; \quad
\sum_{j=1}^{n} \lambda_j x_{ij} = \beta x_{ik} - s_i^-, \; i = 1, \dots, m; \quad
\sum_{j=1}^{n} \lambda_j y_{rj} = \beta y_{rk} + s_r^+, \; r = 1, \dots, s; \quad
\lambda_j, s_i^-, s_r^+, \beta \ge 0. \tag{2}
$$

The Enhanced Russell Graph measure, defined as the optimal value of the above model, satisfies several interesting properties from a mathematical and economic point of view. However, it presents a weakness. In particular, (1), or equivalently (2), yields efficient targets, points on the piecewise-linear frontier, which are far away from the inputs and outputs of the evaluated DMU $k$. To illustrate this idea, note that the targets for this model are defined as $x_{ik} - s_i^-$ for inputs and $y_{rk} + s_r^+$ for outputs and that the slacks, $s_i^-$ and $s_r^+$, appear in the objective function. Consequently, and since we are minimizing, the model seeks slacks as large as possible and, therefore, (1) yields the furthest targets for DMU $k$ with original inputs and outputs $(x_k, y_k)$. In order to determine closest targets instead of furthest targets, it might seem enough to replace "minimizing" by "maximizing." However, this is not true. In this case, it can be shown that the targets generated by the model would not be technically efficient but inefficient [3] and, therefore, could not serve as benchmarks for the assessed DMU.

This problem was the main reason behind the introduction of different approaches in the literature to determine closest targets. On the one hand, a group of researchers [20, 21] focus their work on finding all the faces of the polyhedron that defines a technology estimated by DEA. For example, in Figure 1, we show five of these faces. The computing time of these algorithms increases significantly as the problem size grows, since this issue is closely related to a combinatorial NP-hard problem. On the other hand, a second group proposes to determine closest targets through mathematical programming [3, 22]. In particular, Aparicio et al. [3] introduced the following Mixed Integer Linear Program to overcome the problem for DMU $k$:

$$\max \; \beta - \frac{1}{m}\sum_{i=1}^{m} \frac{s_i^-}{x_{ik}}$$
subject to
(c.1) $\beta + \frac{1}{s}\sum_{r=1}^{s} s_r^+/y_{rk} = 1$,
(c.2) $\sum_{j=1}^{n} \lambda_j x_{ij} = \beta x_{ik} - s_i^-$, $i = 1, \dots, m$,
(c.3) $\sum_{j=1}^{n} \lambda_j y_{rj} = \beta y_{rk} + s_r^+$, $r = 1, \dots, s$,
(c.4) $-\sum_{i=1}^{m} \nu_i x_{ij} + \sum_{r=1}^{s} \mu_r y_{rj} + d_j = 0$, $j = 1, \dots, n$,
(c.5) $\nu_i \ge 1$, $i = 1, \dots, m$,
(c.6) $\mu_r \ge 1$, $r = 1, \dots, s$,
(c.7) $d_j \le M b_j$, $j = 1, \dots, n$,
(c.8) $\lambda_j \le M (1 - b_j)$, $j = 1, \dots, n$,
(c.9) $b_j \in \{0, 1\}$, $j = 1, \dots, n$,
(c.10) $\lambda_j \ge 0$, $j = 1, \dots, n$,
(c.11) $s_i^- \ge 0$, $i = 1, \dots, m$,
(c.12) $s_r^+ \ge 0$, $r = 1, \dots, s$,
(c.13) $d_j \ge 0$, $j = 1, \dots, n$,
(c.14) $\beta \ge 0$. $\tag{3}$

Regarding (3), some comments are in order. First, note that (c.1)–(c.3) are the same constraints as those used in (2), which implies that we are considering the same set of feasible points; note also, however, that the objective function is maximized in this case instead of minimized. Second, with (c.4)–(c.6) we are considering supporting hyperplanes such that all the points of the estimated technology (a polyhedron) lie on or below these hyperplanes. Third, (c.7) and (c.8) are the key conditions that connect the two previous sets of constraints; specifically, they prevent the targets from corresponding to interior points of the estimated technology. Finally, (c.9) defines $b_j$ as a binary variable and (c.10)–(c.14) are the usual nonnegativity constraints.

A weakness of the approach in Aparicio et al. [3] is that it uses a "big $M$" to model the key constraints (c.7) and (c.8). Specifically, the big $M$ allows $d_j$ to be linked to $\lambda_j$ by means of the binary variable $b_j$. However, a valid value of $M$ may be calculated if and only if we previously calculate all the faces that define the technology. Accordingly, this alternative is again associated with a combinatorial NP-hard problem. In the same manner, Jahanshahloo et al. [22] resorted to Bilevel Linear Programming and a big $M$ to determine closest targets and, therefore, their approach presents a similar drawback.
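The role of these key constraints can be made explicit with a short derivation. Combining (c.7) and (c.8) with (c.9), (c.10), and (c.13), for each DMU $j$ we obtain

$$b_j = 1 \;\Rightarrow\; \lambda_j = 0, \qquad b_j = 0 \;\Rightarrow\; d_j = 0, \qquad \text{and hence} \qquad \lambda_j d_j = 0, \quad j = 1, \dots, n.$$

In words, a DMU may act as a referent ($\lambda_j > 0$) only if it lies on the supporting hyperplane of (c.4) ($d_j = 0$), which is precisely what keeps the projection on the efficient frontier; the difficulty is that any valid $M$ must upper-bound every $d_j$ and $\lambda_j$, and such a bound is not known in advance.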

In view of the preceding discussion, from a computational point of view, the determination of closest targets in DEA has not yet been satisfactorily solved and, consequently, this justifies the effort to apply new methods in order to overcome the problem. In this paper, we apply genetic algorithms to solve (3). In particular, since the problem is computationally hard, we focus in the following sections on finding valid solutions of (3).

3. Genetic Algorithms for Determining Closest Targets

Genetic algorithms [29] are used here to obtain a satisfactory solution of (3). A population of chromosomes representing particular valid solutions of (2) is explored. Each chromosome represents a candidate solution and is composed of $\beta$, $\lambda_j$, $s_i^-$, and $s_r^+$, with $j = 1, \dots, n$, $i = 1, \dots, m$, and $r = 1, \dots, s$. An evaluation of each chromosome is calculated by using the objective function of (3), namely $\beta - \frac{1}{m}\sum_{i=1}^{m} s_i^-/x_{ik}$.
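A minimal C sketch of this representation and of the evaluation function follows (the struct layout and identifiers are assumptions of ours, not the authors' code; the objective is that of (3)):

/* Sketch of the chromosome representation: beta, lambda, and the
   slacks s^- and s^+ (our own layout, not the authors' code). */
typedef struct {
    double beta;       /* the scalar beta of (3)  */
    double *lambda;    /* lambda_j, j = 1..n      */
    double *sminus;    /* s_i^-,    i = 1..m      */
    double *splus;     /* s_r^+,    r = 1..s      */
} Chromosome;

/* Evaluation of a chromosome for the assessed DMU k: the objective
   function of (3), beta - (1/m) * sum_i s_i^- / x_ik (higher is better).
   X is the m x n input matrix with X[i*n + j] = x_ij. */
double evaluate(const Chromosome *c, const double *X, int n, int m, int k)
{
    double sum = 0.0;
    for (int i = 0; i < m; i++)
        sum += c->sminus[i] / X[i * n + k];
    return c->beta - sum / (double)m;
}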

Algorithm 1 shows the general scheme of the genetic algorithm; the functions appearing in the scheme and the values of the main parameters are described in the following subsections.

Algorithm 1: General scheme of the genetic algorithm.
(1) Initialize(population)
(2) while the stop criterion is not reached do
(3)   Evaluate(population)
(4)   best := the best ranking of population
(5)   offspring := Crossover and Mutation(best)
(6)   population := Replace(population, offspring)
(7) end while
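To make the control flow of Algorithm 1 concrete, the following self-contained C toy reduces a chromosome to a single double with a toy fitness function; it illustrates only the scheme (initialize, evaluate, rank, breed, mutate), not the DEA chromosomes defined in Section 3.1:

#include <stdio.h>
#include <stdlib.h>

#define POP   20   /* population size                       */
#define NBEST 10   /* survivors per generation              */
#define GENS  50   /* stop criterion: number of generations */

/* Toy fitness with its maximum at x = 3 (stand-in for the objective of (3)). */
static double fitness(double x) { return -(x - 3.0) * (x - 3.0); }
static double rnd(void) { return (double)rand() / RAND_MAX; }

/* Sort by decreasing fitness, so pop[0..NBEST-1] is the best ranking. */
static int cmp(const void *a, const void *b)
{
    double fa = fitness(*(const double *)a);
    double fb = fitness(*(const double *)b);
    return (fa < fb) - (fa > fb);
}

int main(void)
{
    double pop[POP];
    for (int i = 0; i < POP; i++) pop[i] = 10.0 * rnd();  /* (1) initialize */
    for (int g = 0; g < GENS; g++) {                      /* (2) main loop  */
        qsort(pop, POP, sizeof pop[0], cmp);              /* (3)-(4) rank   */
        for (int i = NBEST; i < POP; i++) {               /* (5) breed      */
            double father = pop[rand() % NBEST];
            double mother = pop[rand() % NBEST];
            double child = 0.5 * (father + mother);       /* crossover      */
            if (rnd() < 0.1) child += 0.5 * (rnd() - 0.5); /* mutation      */
            pop[i] = child;                               /* (6) replace    */
        }
    }
    qsort(pop, POP, sizeof pop[0], cmp);
    printf("best x = %f, fitness = %f\n", pop[0], fitness(pop[0]));
    return 0;
}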

3.1. Defining a Valid Chromosome

A valid chromosome has to satisfy at least the following constraints in (3): (c.2), (c.3), (c.8), (c.9), (c.10), (c.11), (c.12), and (c.14). Four methods to generate the initial population of chromosomes have been tested. Methods 1, 2, and 4 are different and Method 3 is an extension of Method 2.

Method 1. The parameters of the chromosomes are generated randomly.

Method 2. The process starts by obtaining a random $\beta$ and a set of $\lambda_j$ values using Algorithm 2. Next, the $b_j$ values are generated considering the $\lambda_j$ in order to satisfy (c.8). The values $s_i^-$ and $s_r^+$ are deduced from the input and output matrices and the previously generated $\lambda_j$ and $\beta$ using (c.2) and (c.3). If a valid solution is obtained the algorithm ends; if not, Algorithm 3 is used.
Algorithm 3 decreases and increases $\beta$ in order to obtain the minimal $\beta$ that satisfies (c.11). Two parameters are used: a factor number and the maximal number of iterations of the process, which also determines the increasing or decreasing step applied to $\beta$. To increase $\beta$, a similar algorithm is used, working in the same way as the decreasing case but performing the add operation in the first inner loop and using constraint (c.12) instead of (c.11) as the condition of both inner loops (a C sketch of this adjustment is given after the listings below).

Algorithm 2: Generation of $\beta$ and $\lambda$.
Require: $X$, $Y$, DMU $k$.
Ensure: $\beta$, $\lambda$.
Generate $\beta$ randomly such that $\beta \ge 0$ and $\beta \le 1$.
Sort the $x_j$ and $y_j$ in decreasing order.
for $j := 1, \dots, n$ do
  if $x_j \ge x_k$ and $y_j \le y_k$ then
    $\lambda_j := 0$
  else if $x_j \le x_k$ and $y_j \ge y_k$ then
    Generate $\lambda_j$ randomly.
  else if ($x_j \le x_k$ and $y_j \le y_k$) or ($x_j \ge x_k$ and
  $y_j \ge y_k$) then
    Generate $\lambda_j \ge 0$ randomly.
  end if
end for
Find the minimum $\lambda_j$ and modify its value in order to satisfy (3)

Algorithm 3: Obtaining a minimal $\beta$.
Require: $\lambda$, $\beta$, DMU $k$.
Ensure: A minimal $\beta$ for the chromosome that satisfies constraint (c.11): $s_i^- \ge 0$.
while the maximal number of iterations is not reached do
  while (c.11) is satisfied do
    {Decrease $\beta$ while $s^-$ still satisfies (c.11)}
    $\beta := \beta - \Delta$
    Generate $s^-$ using expression (c.2).
  end while
  {At this point (c.11) is not satisfied because of the decrease of $\beta$. Increase the $\beta$ value
  until it is satisfied again}
  repeat
    $\beta := \beta + \Delta / f$
    Generate $s^-$ using (c.2).
  until (c.11) is satisfied
end while
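Under our reading of Algorithm 3, $s_i^-$ follows from (c.2) as $s_i^- = \beta x_{ik} - \sum_j \lambda_j x_{ij}$, so (c.11) holds if and only if $\beta$ is large enough for every input. The following C sketch mimics the two inner loops; the identifiers and the step management through the factor $f$ are assumptions of ours, not the authors' code:

/* Does beta satisfy (c.11), i.e., s_i^- = beta*x_ik - sum_j lambda_j*x_ij >= 0
   for every input i?  X is the m x n input matrix, X[i*n + j] = x_ij. */
static int satisfies_c11(double beta, const double *lambda,
                         const double *X, int n, int m, int k)
{
    for (int i = 0; i < m; i++) {
        double ref = 0.0;
        for (int j = 0; j < n; j++) ref += lambda[j] * X[i * n + j];
        if (beta * X[i * n + k] - ref < 0.0) return 0;  /* some s_i^- < 0 */
    }
    return 1;
}

/* Sketch of the beta adjustment of Algorithm 3: decrease beta by delta while
   (c.11) still holds, then recover (c.11) in finer steps delta/factor, and
   refine the step at each outer iteration. */
double minimal_beta(double beta, const double *lambda, const double *X,
                    int n, int m, int k,
                    double delta, double factor, int max_iter)
{
    for (int it = 0; it < max_iter; it++) {
        while (satisfies_c11(beta, lambda, X, n, m, k))
            beta -= delta;                 /* decrease while (c.11) holds   */
        do
            beta += delta / factor;        /* increase until (c.11) again   */
        while (!satisfies_c11(beta, lambda, X, n, m, k));
        delta /= factor;                   /* refine the step and repeat    */
    }
    return beta;
}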

Method 3. This method is an extension of Method 2, adding a third tweak process for the $\lambda_j$ values (Algorithm 4).

Algorithm 4: Tweak process for $\lambda$ (Method 3).
Require: $X$, $Y$, DMU $k$, maximal number of iterations.
Ensure: Chromosome $C$.
repeat
  Generate $\beta$ randomly.
  Generate $\lambda$ using Algorithm 2, and $s^-$ and $s^+$ using (c.2) and (c.3).
  while the chromosome is not valid and the number of iterations does not exceed the maximum do
    if some $s_i^- < 0$ and all $s_r^+ \ge 0$ then
      Increase $\beta$
    end if
    if some $s_r^+ < 0$ and all $s_i^- \ge 0$ then
      Decrease $\beta$
    end if
    if some $s_i^- < 0$ and some $s_r^+ < 0$ then
      Choose a nonnull $\lambda_j$ randomly and make it equal to zero, decreasing the rest
      of the $\lambda$ values, and find the minimum $\lambda_j$ and modify its value in order to satisfy (3)
    end if
  end while
until a valid chromosome is obtained, or the maximal number of iterations is exceeded, or the number of
nonnull $\lambda_j$ is lower than a threshold

Method 4. Method 4 consists of a hyperheuristic (a metaheuristic that operates over another metaheuristic) [30]. A genetic algorithm has been used to produce sets of $\lambda_j$ and $\beta$ for the initial population which satisfy (c.2), (c.3), (c.11), (c.12), and (c.14). The evaluation function in the hyperheuristic is the sum of the negative values of $s_i^-$ and $s_r^+$, the chromosome with the highest score being the best candidate. Moreover, the values of $\beta$ and $\lambda_j$ are also considered, penalizing values close to 1.
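As we read it, the scoring idea of the hyperheuristic can be sketched in C as follows; the penalty form, threshold, and weight are assumptions of ours:

/* Sketch of the hyperheuristic score (our reading): candidates are penalized
   by the magnitude of their negative slacks, i.e., by how much they violate
   (c.11) and (c.12), so a higher score means closer to a valid chromosome;
   a beta too close to 1 is also penalized (assumed penalty form). */
double hyper_score(double beta, const double *sminus, int m,
                   const double *splus, int s)
{
    double score = 0.0;
    for (int i = 0; i < m; i++)
        if (sminus[i] < 0.0) score += sminus[i];   /* adds a negative value */
    for (int r = 0; r < s; r++)
        if (splus[r] < 0.0) score += splus[r];
    if (beta > 0.95) score -= 10.0 * (beta - 0.95); /* assumed beta penalty */
    return score;                                   /* higher is better     */
}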

3.2. Select the Best Ranking, Crossover, and Mutation

In each generation, a proportion of the existing population is selected to breed a new generation. The evaluations of all the chromosomes in the population are compared in each generation, and only part of them (those in the best ranking) survive. The number of chromosomes that survive in each generation is a preset parameter. For each two new solutions to be produced ("son" and "daughter"), a pair of "parent" chromosomes ("father" and "mother") is selected from the set of surviving chromosomes.

Due to the number of constraints to satisfy, a traditional crossover method would produce an offspring with a high rate of nonvalid chromosomes. With the aim of avoiding this, a crossover with different "levels" is introduced in Algorithm 5. The defined levels are 1 for $\beta$, 2 for $\lambda$, 3 for $s^-$, and 4 for $s^+$. A level is chosen randomly, the values in that level are crossed, and those in the lower levels are deduced to satisfy (3) (a C sketch of this operator is given after the listing).

Algorithm 5: Level-based crossover.
Require: Chromosome $C_1$, Chromosome $C_2$.
Ensure: Chromosome $son$, Chromosome $daughter$.
Generate $level \in \{1, 2, 3, 4\}$ randomly.
Generate $p \in (0, 1)$ randomly.
if $level = 1$ then
  {Crossing $\beta$}
  $\beta^{son} := \mathrm{cross}(\beta^{C_1}, \beta^{C_2}, p)$
  $\beta^{daughter} := \mathrm{cross}(\beta^{C_2}, \beta^{C_1}, p)$
else if $level = 2$ then
  {Crossing $\lambda$}
  for $j := 1, \dots, n$ do
    $\lambda_j^{son} := \mathrm{cross}(\lambda_j^{C_1}, \lambda_j^{C_2}, p)$
    $\lambda_j^{daughter} := \mathrm{cross}(\lambda_j^{C_2}, \lambda_j^{C_1}, p)$
  end for
else if $level = 3$ then
  {Crossing $s^-$}
  for $i := 1, \dots, m$ do
    $s_i^{-,son} := \mathrm{cross}(s_i^{-,C_1}, s_i^{-,C_2}, p)$
    $s_i^{-,daughter} := \mathrm{cross}(s_i^{-,C_2}, s_i^{-,C_1}, p)$
  end for
else if $level = 4$ then
  {Crossing $s^+$}
  for $r := 1, \dots, s$ do
    $s_r^{+,son} := \mathrm{cross}(s_r^{+,C_1}, s_r^{+,C_2}, p)$
    $s_r^{+,daughter} := \mathrm{cross}(s_r^{+,C_2}, s_r^{+,C_1}, p)$
  end for
end if
Deduce $s^-$, $s^+$, and $b$ for $son$ and $daughter$ in order to satisfy (3)
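A C sketch of the level-based crossover follows; the Chromosome layout repeats the earlier sketch, and the mixing rule cross() (a random convex combination) is an assumption of ours, not the authors' operator. After crossing, the dependent variables must be recomputed so that (c.2), (c.3), and (c.8) hold again:

#include <stdlib.h>

/* Same layout as the earlier sketch, repeated for self-containment. */
typedef struct {
    double beta;
    double *lambda, *sminus, *splus;   /* sizes n, m, s */
} Chromosome;

/* Assumed mixing rule: son and daughter take complementary convex
   combinations of the two parent values. */
static void cross(double *a, double *b, double p)
{
    double son = p * *a + (1.0 - p) * *b;
    double dau = (1.0 - p) * *a + p * *b;
    *a = son; *b = dau;
}

/* son and dau are assumed to start as copies of the two parents; only the
   randomly chosen level is crossed, the rest must be re-deduced afterwards. */
void crossover(Chromosome *son, Chromosome *dau, int n, int m, int s)
{
    int level = rand() % 4 + 1;
    double p = (double)rand() / RAND_MAX;
    switch (level) {
    case 1: cross(&son->beta, &dau->beta, p); break;               /* beta   */
    case 2: for (int j = 0; j < n; j++)
                cross(&son->lambda[j], &dau->lambda[j], p); break; /* lambda */
    case 3: for (int i = 0; i < m; i++)
                cross(&son->sminus[i], &dau->sminus[i], p); break; /* s^-    */
    case 4: for (int r = 0; r < s; r++)
                cross(&son->splus[r], &dau->splus[r], p); break;   /* s^+    */
    }
    /* ...then re-deduce the dependent variables so (3) is satisfied. */
}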

In each iteration, a chromosome is randomly chosen to be mutated. The procedure is similar to crossover, considering the dependencies between variables and defining different levels of mutation.

4. Parallel Algorithms

The algorithms proposed for the GA are very time-consuming when the problem size and the population size increase. Thus, we propose parallel algorithms with the aim of reducing the execution time. The parallelization is carried out on a shared-memory model, but a design for distributed memory could be generated similarly. Thus, we assume a shared-memory computer with several processors. The parallel programming environment that allows the use of this computer is OpenMP [31]. There are multithread implementations of the basic linear algebra library BLAS [32], which can be used by higher-level libraries (LAPACK [33]) to take advantage of this architecture by using multithreading techniques. For example, the Intel MKL library and the ATLAS [34] package have efficient multithread versions of BLAS.

The costliest parts of the genetic algorithm are the evaluation, crossover, and mutation functions, and they have been parallelized simply by assigning a subset of the chromosomes to each processor. The population is initialized only once, but the initialization has a high cost and is parallelized in the same way.
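With OpenMP, this chromosome-level distribution amounts to a parallel for over the population. A sketch follows (types and evaluate() as in the earlier sketches; our assumptions, not the authors' code):

#include <omp.h>

/* Chromosome and evaluate() as in the sketches of Section 3 (assumed). */
typedef struct {
    double beta;
    double *lambda, *sminus, *splus;
} Chromosome;

double evaluate(const Chromosome *c, const double *X, int n, int m, int k);

/* Population evaluation parallelized with OpenMP: each thread evaluates a
   block of chromosomes; initialization, crossover, and mutation can be
   distributed among the processors in the same way. */
void evaluate_population(Chromosome *pop, int pop_size, double *fitness,
                         const double *X, int n, int m, int k)
{
    #pragma omp parallel for schedule(static)
    for (int c = 0; c < pop_size; c++)
        fitness[c] = evaluate(&pop[c], X, n, m, k);
}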

5. Experimental Results

Experiments have been carried out on a shared-memory computer with Itanium-2 dual-core processors. The code has been written in C. We used version 11.1 of the Intel compiler and the Intel MKL library, version 10.2.2.025, which contains a multithread implementation of LAPACK.

Some experiments have been carried out to tune certain parameters of the algorithm. In all of them, the same values were used for the parameters of the genetic algorithm. Moreover, for all the experiments, the data have been simulated.

The experiments have three objectives. The first is to compare the four methods of creating valid chromosomes, the second is to study the genetic algorithm, and the third is to study the execution cost when the parallel algorithm is used, in all cases varying the problem size.

Table 1 shows both the averaged execution time (in seconds) and the averaged percentage of valid chromosomes associated with the four aforementioned methods. Additionally, the corresponding standard deviations are shown as subscripts. Method 1 works very poorly for all six instances. Method 2 performs better than Method 1; however, its averaged percentage of valid chromosomes decreases as the problem size increases, reaching a value of zero for the biggest numerical example. The performance of Method 4 is also worse than that of the second method, even with respect to the execution cost. On the other hand, Method 3 is clearly the best approach for determining valid chromosomes, although the size of the problem adversely affects its performance. Specifically, Figure 2 illustrates the superiority of the third method over the other three possibilities.

As for the genetic algorithm, Table 2 shows the averaged execution cost, the averaged percentage of valid chromosomes, and the averaged best solution (the objective function in (3)) when the genetic algorithm is used together with Method 3. The utilization of Method 3 is justified by its superiority over Methods 1, 2, and 4. Regarding the percentage of valid chromosomes, the algorithm generates about 84% in the best case and about 30% in the worst case (the two most complex instances). Although there is no clear trend with respect to the size of the analyzed problems, the results show that the averaged percentage of valid chromosomes is smaller for the two most complex instances. Moreover, although the last instance is considerably more complex than the remaining simulations, the genetic algorithm does not perform worse on it than on the preceding numerical example.

Table 3 shows the execution time and speedup for the parallel version of the genetic algorithm. In general, the speedup values are satisfactory and improve as the problem size increases. For small problems, the speedup obtained with 16 processors is lower than that obtained with 8 or fewer. Therefore, the selection of the number of processors is essential for resource optimization.

6. Conclusions and Future Works

Determining benchmarking information through closest efficient targets is one of the relevant topics in the recent Data Envelopment Analysis literature. However, from a computational point of view, it has so far been addressed by unsatisfactory approaches since, as was argued, all of the existing methods are related to a combinatorial NP-hard problem.

In this paper, for the first time, the determination of closest targets has been approached by means of genetic algorithms and parallel programming. Specifically, a first step has been carried out towards solving the problem by obtaining valid solutions for the mathematical programming model proposed by Aparicio et al. [3]. In this respect, four different methods to generate the initial population of chromosomes were implemented and tested by means of simulated experiments. The third method, based on three algorithms that yield suitable values for the parameters of the problem, clearly presented the best performance. In a subsequent phase, this third method was used to generate valid chromosomes within the genetic algorithm. Finally, the execution time and speedup for the parallel version of the genetic algorithm were reported, showing in particular that the speedup improves as the problem size increases.

In this paper, the introduced approach only takes into account eight constraints out of a total of fourteen in (3). Although the complexity of the problem justifies this first step, the development of a method considering the remaining restrictions may be a good avenue for follow-up research. In particular, improving the method that generates the initial population of chromosomes and testing different heuristics could help to achieve this specific objective. A deeper study of the relation between the different initial parameters, the size of the problem, the required computing time, and the effectiveness of the algorithm is also planned. Finally, it is worth mentioning that (3) is associated with the Enhanced Russell Graph measure [27]. However, there are many measures in DEA that can be used for measuring technical efficiency through closest targets. Therefore, extending the genetic algorithm approach to solve all of them can be seen as suitable future work.

Conflict of Interests

The authors declare that they have no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the computer resources, technical expertise, and assistance provided by the Parallel Computing Group of the Murcia University. Additionally, Juan Aparicio and Jose Lopez-Espin are grateful to the Generalitat Valenciana for supporting this research with Grant no. GV/2013/112.