Abstract

Data envelopment analysis (DEA) has been extended to cross-efficiency evaluation to provide better discrimination and ranking of decision-making units (DMUs). Current research on cross-efficiency mainly focuses on the non-uniqueness of the optimal solutions of the underlying linear programs and on information aggregation. As a common distance metric, the standardized Euclidean distance is introduced to define the discrimination power between two vectors and the deviation degree that measures the difference between an individual preference and the group ideal preference. Based on these definitions, an alternative method is presented to compare multiple optimal solutions, and a universal weighted cross-efficiency model considering both dynamic adjustment of weights and preference formulation is constructed for evaluation and ranking. Two numerical examples illustrate the effectiveness of the comparison method for multiple optimal solutions and of the weight determination method for DMUs, respectively. Finally, a practical application evaluating environmental treatment efficiency in the western area of China is given. Comparative analysis shows that our model could be more moderate, flexible, and general than some available models and methods, which can extend the theoretical research on cross-efficiency evaluation.

1. Introduction

Data envelopment analysis (DEA) is a common tool for measuring the relative efficiency of decision-making units (DMUs). In the classic DEA model based on the “efficient frontier” [1], more than one DMU may be efficient with the same efficiency score of one, which makes the efficient DMUs indistinguishable and impossible to rank. To overcome this weakness, Sexton, Silkman, and Hogan [2] suggested the cross-efficiency evaluation method for distinguishing multiple DMUs. The essence of cross-efficiency evaluation is to assess each DMU by both the self-evaluation of the DMU and the peer-evaluations from other DMUs. Self-evaluation means that a DMU can choose the best input and output weights to calculate its own efficiency, while peer-evaluation means that a DMU obtains its efficiency using the input and output weights chosen by the other DMUs.

However, traditional cross-efficiency evaluation may be uncertain due to the non-uniqueness of the optimal solution of the linear programming (LP) problem. Therefore, Doyle and Green [3] suggested secondary goals to determine the peer-efficiencies from a benevolent or aggressive viewpoint. The benevolent formulation makes the sum of the cross-efficiency scores of the other evaluated DMUs as large as possible, while the aggressive formulation makes this sum as small as possible. Doyle and Green [4] further developed the secondary goal to make each of the other DMUs’ cross-efficiency scores as small or as large as possible. However, an absolutely aggressive or benevolent formulation may result in extreme efficiencies. Liang et al. [5] proposed three cross-efficiency models based on secondary objectives; all three methods strengthen the stability of the cross-efficiency method, but some of the adopted strategies may be too extreme. In order to avoid extreme results, neutral strategies have been considered in the literature. Wang and Chin [6] proposed a “neutral” DEA model in which the choice of weights depends on maximizing the relative contribution of outputs through a max-min formulation, which can effectively reduce the number of zero weights for outputs. Wang et al. [7] introduced two virtual DMUs, the ideal unit and the anti-ideal unit, and adopted a neutral idea to construct four DEA models to solve for the optimal weights of each DMU. An improvement produced by Ramon et al. [8] sought to prevent unrealistic input and output weighting schemes instead of eliminating their impacts. Carrillo and Jorge [9] continued pursuing the neutral strategy, by which each DMU obtains its optimal input and output weights without concern for their effect on peer-evaluations. Along this line, Shi et al. [10] provided a new neutral cross-efficiency evaluation method that regards the ideal virtual frontier and the anti-ideal virtual frontier as evaluation criteria.

As a matter of fact, there is a dilemma: it is difficult to elicit exact benevolent, neutral, or aggressive attitudes from decision makers. Even if the exact attitude is specified, “sometimes the weight sets induced by the aggressive or benevolent formulation are still non-unique”, as mentioned by Yang et al. [11]. Therefore, Yang et al. [11] constructed a cross-efficiency interval based on secondary goals, in which the cross-efficiency scores from the benevolent and aggressive formulations are the upper and lower endpoints, respectively. A new extension-dependent function for three-parameter interval numbers was proposed to perform uncertainty analysis of decision data [12]. Obviously, a three-parameter interval number can be used to represent cross-efficiency scores derived from the benevolent, neutral, and aggressive attitudes of decision makers. Naturally, how to rank DMUs whose cross-efficiencies are denoted by interval numbers has attracted many researchers. In Yang’s paper [11], the interval cross-efficiency matrix is ranked according to the acceptability indices computed by the stochastic multicriteria acceptability analysis method (SMAA-2). Ramon et al. [13] developed a couple of models that allow for all possible weights of the DMUs simultaneously to yield individual lower and upper bounds of the cross-efficiency, thereby forming cross-efficiency intervals.

Information aggregation of peer-efficiency scores with weights has received growing interest in cross-efficiency research. The traditional cross-efficiency model integrates self-efficiency and peer-efficiencies by simple arithmetic or geometric averages, which may ignore differences in the decision power and importance degrees of the DMUs. Many theories and tools of information aggregation have been applied to aggregate cross-efficiencies. Shannon entropy was used early on in a cross-efficiency evaluation model [14], and further research and improvements concerning Shannon entropy were provided by Wu et al. [15] and Song and Liu [16]. The Shapley value in cooperative games has also become a subsidiary tool for determining weights [17, 18]. Oukil and Amin [19] presented a minimax disparity model to determine the aggregation weights in cross-efficiency evaluation. The ordered weighted averaging (OWA) operator, a common method for information aggregation, was also introduced to reasonably allocate weights between self-evaluation and peer-evaluation efficiencies in terms of the optimism level of decision makers [20]. Oukil [21] further developed the OWA operator and presented two OWA-based procedures that effectively meet the requirements of information aggregation while exploiting the positive properties of the preference-ranking approach. Based on ideas from multicriteria decision-making theory, the relative importance of the cross-efficiency scores was estimated to derive aggregation weights [22]. In particular, Carrillo and Jorge [22] mentioned that two aspects, an intrinsic component reflecting the discrimination power of the DMUs’ appraisals and a contextual component reflecting the relationships among the DMUs’ appraisals, should be considered simultaneously for weight determination. Recently, bargaining theory and the Kalai-Smorodinsky solution have also been utilized by Contreras [23] to discriminate between optimal weighting profiles; in this approach, the input and output multipliers are agreed upon by the peer DMUs.

More methods have also been applied to derive DMU weights and implement information aggregation, such as the evidential-reasoning approach [24, 25], satisfaction degree [26], game theory [27], prospect theory with different risk preferences [28, 29], and group evaluation [30].

In the conventional cross-efficiency model and its many extensions, the self-evaluation score of a DMU is calculated by primary goal programming, which may lead to extreme input and output weights because the self-efficiency is maximized, and the peer-evaluation scores are usually obtained by a secondary goal. An improvement from Jahanshahloo et al. [14] defined a new secondary goal to identify optimal input and output weights while promoting symmetry, because excessive weight flexibility in the primary goal programming, which seeks maximum efficiency, might lead to one or more variables being ignored. Lin et al. [31] proposed an iterative method to obtain weights, which not only ensures unique input and output weights but also minimizes the number of zero weights without any prior weight restriction. Noguchi et al. [32] proposed a new ordering to derive the weights of ranks by considering the feasible region of the constraint set in the LP. Hong and Jeong [33] developed cross-efficiency heuristics without any LP model as an alternative to cross-efficiency ranking methods and their variations and furthermore constructed a systematic consistency evaluation framework to compare the consistency level of any DEA-related full ranking method. Another contribution toward avoiding inconsistent and unbalanced evaluation standards comes from Li et al. [34], who suggested a common evaluation standard for all DMUs and a game-like iterative procedure to obtain the optimal balanced cross-efficiency.

In this article, the standardized Euclidean distance is introduced to extend the traditional cross-efficiency model. On the one hand, the standardized Euclidean distance is used to select a suitable set of input and output weights from the multiple optimal solutions of the linear program. More importantly, it is also used to define the deviation degree and measure the difference between the individual preference and the group ideal preference, which is the basis for cross-efficiency aggregation with DMU weights.

The remainder of this paper is organized as follows. Section 2 introduces the traditional cross-efficiency evaluation model and the Euclidean distance. Section 3 presents an alternative approach based on the standardized Euclidean distance to select a set of input and output weights from multiple optimal solutions. Subsequently, a new cross-efficiency model with DMU weights is constructed in Section 4 to obtain integrated efficiency scores for evaluation and ranking. In Section 5, a practical application involving real data, aimed at evaluating the efficiency of environmental treatment in the western area of China, is given to illustrate the effectiveness of the proposed method. Finally, some conclusions and remarks are provided in Section 6.

2. Preliminary

2.1. Traditional Cross-Efficiency Model

Suppose there are $n$ DMUs. Let $x_j=(x_{1j},x_{2j},\ldots,x_{mj})^{T}$ be the input vector of $\mathrm{DMU}_j$ and $y_j=(y_{1j},y_{2j},\ldots,y_{sj})^{T}$ be the output vector of $\mathrm{DMU}_j$. The traditional cross-efficiency model [2] is described as Model I.

Model I (Traditional cross-efficiency model).

(i) Step 1: create the primary goal (CCR model) to calculate the self-evaluation score of $\mathrm{DMU}_d$. The linear programming problem is
$$\theta_{dd}=\max \sum_{r=1}^{s}u_{rd}y_{rd}\quad \text{s.t.}\quad \sum_{i=1}^{m}v_{id}x_{id}=1,\quad \sum_{r=1}^{s}u_{rd}y_{rj}-\sum_{i=1}^{m}v_{id}x_{ij}\le 0,\ j=1,\ldots,n,\quad u_{rd}\ge 0,\ v_{id}\ge 0. \quad (1)$$
(ii) Let $(u_d^{*},v_d^{*})$ be an optimal solution of equation (1), where $u_d^{*}=(u_{1d}^{*},\ldots,u_{sd}^{*})$ is the vector of optimal output weights and $v_d^{*}=(v_{1d}^{*},\ldots,v_{md}^{*})$ is the vector of optimal input weights, so the optimal value $\theta_{dd}$ is the self-evaluation score of $\mathrm{DMU}_d$.
(iii) Step 2: calculate the peer-evaluation score $\theta_{dj}$, which represents the peer-efficiency of $\mathrm{DMU}_j$ using the optimal input and output weights that $\mathrm{DMU}_d$ has chosen:
$$\theta_{dj}=\frac{\sum_{r=1}^{s}u_{rd}^{*}y_{rj}}{\sum_{i=1}^{m}v_{id}^{*}x_{ij}},\quad d,j=1,\ldots,n. \quad (2)$$
(iv) Step 3: generate the cross-efficiency matrix $\Theta=(\theta_{dj})_{n\times n}$ composed of self-evaluation scores and peer-evaluation scores:
$$\Theta=\begin{pmatrix}\theta_{11}&\theta_{12}&\cdots&\theta_{1n}\\ \theta_{21}&\theta_{22}&\cdots&\theta_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ \theta_{n1}&\theta_{n2}&\cdots&\theta_{nn}\end{pmatrix}, \quad (3)$$
(v) where the diagonal and off-diagonal cells are composed of the self-evaluations and the peer-evaluations, respectively; $\theta_{jj}$ represents the self-evaluation efficiency of $\mathrm{DMU}_j$ and $\theta_{dj}$ ($d\ne j$) the peer-evaluation efficiency.
(vi) Step 4: calculate the final efficiency of $\mathrm{DMU}_j$; we have
$$\bar{\theta}_{j}=\frac{1}{n}\sum_{d=1}^{n}\theta_{dj}. \quad (4)$$

Obviously, the final efficiency $\bar{\theta}_{j}$ is the mean of column $j$ of the cross-efficiency matrix $\Theta$.
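For a computational view of Model I, the following is a minimal sketch in Python (assuming numpy and scipy are available); the function and variable names (ccr_weights, cross_efficiency_matrix, X, Y) are illustrative and not part of the original formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, d):
    """Solve the primary-goal (CCR) LP of equation (1) for DMU d.

    X is an (m, n) array of inputs and Y an (s, n) array of outputs, with
    DMUs in columns.  The decision variable is z = [u; v] (output weights
    followed by input weights).  Returns (u, v, self-efficiency of DMU d).
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-Y[:, d], np.zeros(m)])                   # maximize u'y_d
    A_eq = np.concatenate([np.zeros(s), X[:, d]]).reshape(1, -1)  # v'x_d = 1
    A_ub = np.hstack([Y.T, -X.T])                                 # u'y_j - v'x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    u, v = res.x[:s], res.x[s:]
    return u, v, -res.fun

def cross_efficiency_matrix(X, Y):
    """Cross-efficiency matrix of equation (3): row d holds the scores
    that DMU d assigns to every DMU j via equation (2)."""
    n = X.shape[1]
    theta = np.zeros((n, n))
    for d in range(n):
        u, v, _ = ccr_weights(X, Y, d)
        theta[d, :] = (u @ Y) / (v @ X)
    return theta

# Final efficiency of Model I (equation (4)): the mean of each column.
# final = cross_efficiency_matrix(X, Y).mean(axis=0)
```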

In Model I, the averaged final efficiency suffers from several significant drawbacks summarized by Song and Liu [16]. Additionally, if the LP has infinitely many optimal solutions, the non-uniqueness and the arbitrariness of the solution returned by the LP solver make the evaluation results uncertain. Therefore, we introduce the Euclidean distance as an alternative tool to extend the traditional cross-efficiency model.

2.2. Euclidean Distance Metric

As a common distance measure, the Euclidean distance is defined as the square root of the sum of the squares of the differences between the corresponding dimensions of two points in the Euclidean n-space.

Definition 1. Let $A=(a_1,a_2,\ldots,a_n)$ and $B=(b_1,b_2,\ldots,b_n)$ be two points in Euclidean $n$-space; the Euclidean distance between $A$ and $B$ is defined as
$$d(A,B)=\sqrt{\sum_{k=1}^{n}(a_k-b_k)^{2}}. \quad (5)$$
It is well known that $d$ is a metric, that is,
(1) $d(A,B)\ge 0$ (positive definiteness)
(2) $d(A,B)=0$ if and only if $A=B$ (reflexivity)
(3) $d(A,B)=d(B,A)$ (symmetry)
(4) $d(A,C)\le d(A,B)+d(B,C)$ for any $A$, $B$, and $C$ (triangle inequality)
The standardized Euclidean distance fully balances the different contributions of the components of a vector.

Definition 2. Let $A=(a_1,a_2,\ldots,a_n)$ and $B=(b_1,b_2,\ldots,b_n)$ be two points in Euclidean $n$-space; the standardized Euclidean distance is defined as
$$d_{s}(A,B)=\sqrt{\sum_{k=1}^{n}\frac{(a_k-b_k)^{2}}{\sigma_k^{2}}}, \quad (6)$$
where $\sigma_k$ is the standard deviation of dimension $k$.
In DEA cross-efficiency evaluation, the vector of cross-efficiency scores of the DMUs can be viewed as a point in Euclidean space, so the Euclidean distance is introduced to implement DEA cross-efficiency aggregation.
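As a small illustration of Definition 2, the following sketch (assuming numpy) computes the standardized Euclidean distance; the per-dimension standard deviations sigma are supplied by the caller.

```python
import numpy as np

def standardized_euclidean_distance(a, b, sigma):
    """Standardized Euclidean distance of Definition 2 (equation (6)).

    a and b are points in Euclidean n-space and sigma[k] is the standard
    deviation of dimension k; each squared difference is divided by
    sigma[k]**2 so that no single dimension dominates the distance.
    """
    a, b, sigma = map(np.asarray, (a, b, sigma))
    return float(np.sqrt(np.sum(((a - b) / sigma) ** 2)))
```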

3. The Comparison of Multiple Optimal Solutions

3.1. Discrimination Power and Comparison Method

Due to the non-uniqueness and arbitrariness of the optimal solutions returned by the LP solver, many extensions with secondary goals have been proposed to deal with the non-uniqueness of the optimal solution of the primary goal programming. However, as mentioned by Lin et al. [31], there is no theoretical proof that secondary goal programming always generates a unique set of input and output weights. Here, an alternative approach is suggested to compare, when necessary, the multiple optimal solutions we can find.

In this section, the standardized Euclidean distance is used to measure the distance between a peer-evaluation vector and the self-evaluation vector. A peer-evaluation vector is composed of a DMU’s self-evaluation score and the peer-evaluation scores it gives to the other DMUs based on a specified optimal solution, and the self-evaluation vector is composed of the self-evaluation scores of all DMUs. In order to describe our approach, the discrimination power of an optimal solution is first defined based on the standardized Euclidean distance.

Definition 3. Let $\theta=(\theta_{11},\theta_{22},\ldots,\theta_{nn})$ be the self-evaluation vector, in which $\theta_{jj}$ represents the self-evaluation efficiency of $\mathrm{DMU}_j$, and let $d_k=(\theta_{k1},\theta_{k2},\ldots,\theta_{kn})$ be the peer-evaluation vector of $\mathrm{DMU}_k$, where $\theta_{kj}$ represents the peer-evaluation efficiency obtained with the optimal solution of $\mathrm{DMU}_k$. The discrimination power of $d_k$ is defined as
$$D(d_k)=\sqrt{\sum_{j=1}^{n}\frac{(\theta_{kj}-\theta_{jj})^{2}}{\sigma_j^{2}}}, \quad (7)$$
where $\sigma_j$ is the standard deviation of dimension $j$. Obviously, $D(d_k)\ge 0$. The bigger $D(d_k)$ is, the greater the distance between the self-evaluation vector and the peer-evaluation vector of $\mathrm{DMU}_k$. Decision makers can select a suitable optimal solution according to their benevolent or aggressive attitudes. From the benevolent viewpoint, the optimal solution with minimal $D(d_k)$ is selected, while the optimal solution with maximal $D(d_k)$ is the target from the aggressive viewpoint.
Based on the discrimination power defined in Definition 3, Model II is given to compare the optimal solutions for any $\mathrm{DMU}_k$.

3.1.1. Model II

(i) Step 1: calculate the self-evaluation scores of all DMUs to create the self-evaluation vector $\theta=(\theta_{11},\theta_{22},\ldots,\theta_{nn})$.
(ii) Step 2: for $\mathrm{DMU}_k$, suppose there are $t$ optimal solutions of the primary goal programming. For each optimal solution $(u_k^{(l)},v_k^{(l)})$, $l=1,\ldots,t$, construct the peer-evaluation vector $d_k^{(l)}$. Obviously, $d_k^{(l)}$ is composed of the peer-evaluation scores computed with $(u_k^{(l)},v_k^{(l)})$.
(iii) Step 3: for each peer-evaluation vector $d_k^{(l)}$, calculate the discrimination power $D(d_k^{(l)})$ using equation (7).
(iv) Step 4: compare the optimal solutions based on their discrimination power and select a suitable set of input and output weights (see the sketch below). If strong discrimination power is needed, the optimal solution with the maximal discrimination power is chosen; conversely, the optimal solution with the minimal discrimination power can be selected.
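As a sketch of the comparison in Steps 3 and 4, the following hypothetical helpers compute the discrimination power of equation (7) and pick a solution; the zero-standard-deviation safeguard of Section 3.2 is left to the caller through the sigma argument.

```python
import numpy as np

def discrimination_power(theta_self, peer_vector, sigma):
    """Discrimination power of Definition 3 (equation (7)): the standardized
    Euclidean distance between the self-evaluation vector and a
    peer-evaluation vector produced by one optimal solution."""
    theta_self, peer_vector, sigma = map(np.asarray, (theta_self, peer_vector, sigma))
    return float(np.sqrt(np.sum(((peer_vector - theta_self) / sigma) ** 2)))

def select_optimal_solution(theta_self, peer_vectors, sigma, aggressive=True):
    """Model II, Step 4: among the candidate peer-evaluation vectors, pick
    the one with the largest (aggressive) or smallest (benevolent)
    discrimination power.  Returns the index of the chosen solution."""
    powers = [discrimination_power(theta_self, d, sigma) for d in peer_vectors]
    return int(np.argmax(powers)) if aggressive else int(np.argmin(powers))
```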

3.2. A Numerical Example

Data from Tone [35] are used to illustrate the reasonableness of Model II for selecting a suitable optimal solution. Consider Table 1.

First of all, we obtain the self-evaluation vector through the primary goal programming, which shows that all self-evaluation efficiencies of the DMUs are 1, that is, $\theta=(1,1,1,1,1,1)$. In this numerical example, we take only one DMU as the research object; for the other DMUs, the selection steps are the same.

Suppose we have found two optimal solutions with an LP solver. In fact, we know from the theory of linear programming that there are infinitely many optimal solutions in this example. For each optimal solution, compute the corresponding peer-evaluation vector, denoted $d^{(1)}$ and $d^{(2)}$. The related optimal solutions and cross-efficiency scores are shown in Tables 2 and 3, respectively.

From Table 3, we see that the cross-efficiency scores the studied DMU gives to the other DMUs differ under the two optimal solutions. We have $d^{(1)}=(1, 0.9959, 0.7998, 0.6588, 0.9694, 0.972)$ and $d^{(2)}=(1, 1, 0.7961, 0.6649, 1, 1)$.

Next, we calculate the discrimination power according to Definition 3. In order to avoid a standard deviation of zero, $\sigma_j$ is replaced by a sufficiently small positive quantity related to the self-efficiency of $\mathrm{DMU}_j$. With this replacement we obtain $D(d^{(1)})=4.2424$ and $D(d^{(2)})=3$.

Obviously, the first optimal solution has a bigger discrimination power than the second. Therefore, from the aggressive viewpoint, which favors strong discrimination power, we can select the first solution as the set of input and output weights used to compute the cross-efficiency scores of the other DMUs. Of course, Model II only provides one possible idea for selecting among multiple optimal solutions.

4. Cross-Efficiency Model considering Preference Distances

4.1. Dynamic Weights of DMUs Based on Deviation Degrees

When all DMUs are regarded as a group of decision makers, cross-efficiency evaluation is actually a form of group evaluation. Therefore, how to aggregate the different opinions, expressed through self-evaluation and peer-evaluation scores, should be emphasized.

The standardized Euclidean distance between the individual preference of a DMU and the group ideal preference can serve as the deviation degree of the individual preference from the group preference, which can be used to determine the weight of the DMU. Furthermore, to consider the different impacts of self-efficiency and peer-efficiencies on the final efficiency scores, an adjustment coefficient of the weights is introduced to reflect the flexibility and universality of the model.

Definition 4. The deviation degree of the individual preference of $\mathrm{DMU}_d$ from the group ideal preference $g=(g_1,g_2,\ldots,g_n)$ is defined as
$$\Delta_d=\sqrt{\sum_{i=1}^{n}\frac{(\theta_{di}-g_i)^{2}}{\sigma_i^{2}}}, \quad (8)$$
where $p_d=(\theta_{d1},\theta_{d2},\ldots,\theta_{dn})$ represents the individual preference of $\mathrm{DMU}_d$, composed of the self-evaluation score of $\mathrm{DMU}_d$ and the peer-evaluation scores calculated using the optimal input and output weights of $\mathrm{DMU}_d$, and $\sigma_i$ is the standard deviation of dimension $i$. Obviously, $\Delta_d\ge 0$. The smaller $\Delta_d$ is, the closer the individual preference of $\mathrm{DMU}_d$ is to the group ideal preference, and the bigger the weight that the DMU can get.
Next, the group ideal preference is discussed. Based on the benevolent, neutral, and aggressive formulations, we get different group ideal preferences:
(1) From the benevolent formulation, the group ideal preference is $g=(\theta_1^{\max},\ldots,\theta_n^{\max})$, where $\theta_i^{\max}$ is the maximal cross-efficiency score of column $i$ of the cross-efficiency matrix.
(2) From the neutral formulation, the group ideal preference is $g=(\bar{\theta}_1,\ldots,\bar{\theta}_n)$, where $\bar{\theta}_i$ is the mean of the cross-efficiency scores of column $i$ of the cross-efficiency matrix.
(3) From the aggressive formulation, the group ideal preference is $g=(\theta_1^{\min},\ldots,\theta_n^{\min})$, where $\theta_i^{\min}$ is the minimal cross-efficiency score of column $i$ of the cross-efficiency matrix.

Definition 5. The adjustment weight $\lambda_d$ of $\mathrm{DMU}_d$ is defined by equation (9) in terms of the deviation degrees, where $\Delta_{\max}$ and $\Delta_{\min}$ are the maximal value and the minimal value among $\Delta_1,\ldots,\Delta_n$, respectively. Obviously, the smaller the deviation degree, the larger the adjustment weight, and different formulations (benevolent, neutral, and aggressive) lead to different group ideal preferences and different weight adjustments.

Definition 6. The weight $w_d$ of $\mathrm{DMU}_d$ is defined by equation (10) as a combination of the initial weight $w_d^{0}$ and the adjustment weight $\lambda_d$, where $a\in[0,1]$ is the adjustment coefficient of the weights and $w_d^{0}$ is the initial weight of $\mathrm{DMU}_d$, which keeps the initial difference of the weights among the DMUs. If $a=0$, $w_d=w_d^{0}$, that is, the weight of $\mathrm{DMU}_d$ is directly specified by the initial weight. If $a=1$, $w_d=\lambda_d$, which means the weight of $\mathrm{DMU}_d$ is fully determined by the adjustment weight. If $0<a<1$, the weight is a compromise between the initial weight and the adjustment weight.
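The weight computation of Definitions 4-6 can be sketched as follows. Since the exact normalizations of equations (9) and (10) are not reproduced above, the sketch assumes a max-min normalization of the deviation degrees for the adjustment weight and a convex combination of the initial and adjustment weights, which matches the stated behavior at a = 0 and a = 1.

```python
import numpy as np

def dmu_weights(theta, formulation="benevolent", a=1.0, w0=None):
    """DMU weights following Definitions 4-6 (a sketch, not the exact
    equations (9)-(10)).

    theta is the n x n cross-efficiency matrix (theta[d, j] = score that
    DMU d gives DMU j).  The group ideal preference is the column-wise
    max / mean / min of theta, depending on the formulation.
    """
    n = theta.shape[0]
    w0 = np.full(n, 1.0 / n) if w0 is None else np.asarray(w0, float)
    ideal = {"benevolent": theta.max(axis=0),
             "neutral": theta.mean(axis=0),
             "aggressive": theta.min(axis=0)}[formulation]
    sigma = theta.std(axis=0)
    sigma[sigma == 0] = 1e-6                         # guard against zero std
    # Definition 4: deviation degree of each DMU's row from the ideal.
    dev = np.sqrt((((theta - ideal) / sigma) ** 2).sum(axis=1))
    # Definition 5 (assumed max-min normalization): smaller deviation,
    # larger adjustment weight.
    span = dev.max() - dev.min()
    adj = np.full(n, 1.0 / n) if span == 0 else (dev.max() - dev) / span
    adj = adj / adj.sum()
    # Definition 6 (assumed convex combination controlled by a).
    w = (1.0 - a) * w0 + a * adj
    return w / w.sum()
```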

4.2. Model Construction

Based on classic cross-efficiency model and dynamic weights, a novel extension model described by Model III is presented to implement cross-efficiency evaluation and ranking.

4.2.1. Model III

(i) Step 1: create the primary goal to calculate the self-evaluation score of $\mathrm{DMU}_d$ by equation (1).
(ii) Step 2: select the optimal solution from the multiple optimal solutions for each $\mathrm{DMU}_d$ through Model II, if necessary.
(iii) Step 3: for each $\mathrm{DMU}_d$, calculate the cross-efficiency scores $\theta_{dj}$, $j=1,\ldots,n$, by equation (2).
(iv) Step 4: generate the cross-efficiency matrix $\Theta$ from the obtained self-evaluation and peer-evaluation scores.
(v) Step 5: calculate the weights $w_d$ according to equations (8)-(10) under the specified preference formulation.
(vi) Step 6: aggregate the efficiency scores in $\Theta$ by
$$\theta_j^{*}=\sum_{d=1}^{n}w_d\,\theta_{dj}, \quad (11)$$
where $\theta_j^{*}$ is the final efficiency of $\mathrm{DMU}_j$.
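Putting the pieces together, a compact sketch of Model III (reusing the cross_efficiency_matrix and dmu_weights helpers sketched earlier) could read as follows; the aggregation of Step 6 is the weighted column sum of equation (11).

```python
def model_iii_scores(X, Y, formulation="benevolent", a=1.0, w0=None):
    """End-to-end sketch of Model III: build the cross-efficiency matrix
    (Steps 1-4), derive the DMU weights (Step 5), and aggregate them by
    equation (11) (Step 6)."""
    theta = cross_efficiency_matrix(X, Y)        # see the Model I sketch
    w = dmu_weights(theta, formulation, a, w0)   # equations (8)-(10), sketched above
    return w @ theta                             # final efficiency of each DMU
```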

4.3. A Simple Numerical Example for Comparative Analysis

In Model III, the DMU weights determined in Step 5 are the key ingredient, so a simple numerical example from Song and Liu [16] is used to illustrate the effectiveness and reasonableness of the DMU weights in Model III. Consider the following cross-efficiency matrix with four DMUs:

Table 4 lists the final weights derived from Wu’s method [36], Song’s method [16], and our weight determination method (E-method for short) with a = 1, and Figure 1 directly shows the differences in the weights under the different methods.

In Figure 1, AF, NF, and BF denote the aggressive, neutral, and benevolent formulations, respectively. Obviously, the weights from Wu’s and Song’s methods differ significantly. In Song’s method, one DMU has a weight of only 0.000309, while in Wu’s method one DMU has a weight of 0.999980. It is also observed that the rankings of the weights from Wu’s method and from Song’s method differ. In particular, some weights from Wu’s and Song’s methods are relatively extreme, which means that some cross-efficiency scores are neglected or exaggerated during information aggregation. For example, a weight of 0.000001 means that the corresponding DMU’s self-evaluation score hardly affects its own final efficiency. Obviously, extreme weights may not be good choices if we want every entry of the cross-efficiency matrix to matter.

Compared to the results from Wu’s method and Song’s method, the weights from our method are more neutral no matter which formulation is used, which not only reflects the differences among DMUs but also does not ignore any efficiency score.

5. Application in Environmental Treatment Efficiency

As an effective evaluation and ranking method, cross-efficiency evaluation not only makes DMUs with the same or similar efficiencies more distinguishable but also adequately considers the interrelationships among all DMUs. Cross-efficiency evaluation and its growing number of extensions have been widely applied in various fields, such as performance assessment of the tourism industry from a group perspective [37] and the selection of cooperative partners [38]. This section applies Model III to measure and rank the environmental treatment efficiency of the western area of China in 2015.

5.1. Evaluation Indicators and Data

The indicators and detailed data for the cross-efficiency evaluation are listed in Tables 5 and 6, respectively. The data for the empirical study come from the 2016 China Statistical Yearbook on Environment, the 2016 China Environment Yearbook, and the 2016 China Statistical Yearbook, and 10 main regions in the western area of China are selected as DMUs. The regions are Chongqing, Sichuan, Guizhou, Yunnan, Guangxi, Shanxi, Gansu, Qinghai, Ningxia, and Xinjiang, labeled from $\mathrm{DMU}_1$ to $\mathrm{DMU}_{10}$, respectively.

5.2. Data Processing and Analysis

The algorithm of Model III is implemented in Matlab, and the data corresponding to the negative indicators y4 and y5 are converted to positive orientation by taking reciprocals. According to Model III, we first calculate the self-evaluation efficiencies of the 10 DMUs by equation (1) and then calculate the peer-efficiencies by equation (2). Here, we directly use the optimal solution of the primary goal programming returned by Matlab. Table 7 shows the cross-efficiency matrix.
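Although the computation here was done in Matlab, the reciprocal handling of the negative indicators can be illustrated with a short Python sketch; the row indices standing for y4 and y5 are placeholders.

```python
import numpy as np

def positivize(Y, negative_rows):
    """Convert negatively oriented output indicators (e.g. the rows holding
    y4 and y5) to positive orientation by taking reciprocals, as described
    in the text."""
    Y = np.asarray(Y, dtype=float).copy()
    Y[negative_rows, :] = 1.0 / Y[negative_rows, :]
    return Y

# Example (hypothetical indices): rows 3 and 4 correspond to y4 and y5.
# Y = positivize(Y, [3, 4])
```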

In Table 7, the diagonal cells represent self-evaluation scores and the off-diagonal cells represent peer-evaluation scores. According to the self-evaluation efficiency scores, five DMUs are DEA-efficient with a score of 1, one DMU is nearly DEA-efficient with a score of 0.9794, and the DMU corresponding to Xinjiang has the minimal self-efficiency, which indicates that the overall performance of environmental treatment in the Xinjiang region is poor. For comparison and ranking, the peer-evaluation efficiency scores need to be considered.

Next, we consider the information aggregation of the cross-efficiency scores with DMU weights. For each DMU, the deviation degree, the weight adjustment, and the final weight are calculated according to equations (8)-(10). Finally, equation (11) is used to obtain the final efficiency scores for comparison and ranking. In this empirical study, we only consider the benevolent formulation and let a = 1 in equation (10). The results under the benevolent formulation are listed in Table 8.

From Table 8, we obtain the ranking of the final efficiency scores, which identifies the DMU with the maximal cross-efficiency evaluation score and the DMU with the minimal one. The self-evaluation results on the diagonal of Table 7 show that several DMUs are efficient or nearly efficient, but these DMUs cannot be distinguished by self-evaluation alone. Compared to self-evaluation, cross-efficiency evaluation makes all DMUs comparable.

5.3. Comparative Analysis with Doyle’s Method

Comparing our method, described by Model III, with Doyle’s method [3] and with the traditional averaged cross-efficiency method described by Model I, the results are listed in Table 9. Note that the benevolent formulation is applied in both our method and Doyle’s method for consistency and comparability; for the neutral and aggressive formulations, the comparative analysis is similar. Figure 2 directly shows the differences in the final efficiency scores under the different aggregation mechanisms.

As shown in Table 9 and Figure 2, the ranking results from the three methods are similar, while the efficiency scores from our model are moderate: they are higher than the averaged efficiencies from Model I and lower than the relatively extreme peer-evaluation scores from Doyle’s method. Furthermore, in Doyle’s method, the sets of input and output weights underlying the different cross-efficiency scores all differ because of the primary and secondary goal programming, while in Model I a DMU uses the same input and output weights for self-evaluation and peer-evaluation. Our compromise method not only keeps the consistency of the evaluation criteria, meaning that each DMU uses the same input and output weights to assess itself and the other DMUs, but also adequately considers the weights of the DMUs under different preference formulations. Furthermore, the weight adjustment coefficient a makes our model more flexible than the other methods.

6. Conclusions

In current research, information aggregation of cross-efficiencies has been a hot topic in DEA evaluation. As a common distance metric, the Euclidean distance can be used to extend DEA cross-efficiency aggregation. In this article, the standardized Euclidean distance is introduced not only to compare different optimal solutions and select a suitable set of input and output weights but also to determine the DMU weights for cross-efficiency aggregation based on the difference between the individual preference and the group ideal preference. To illustrate the effectiveness and reasonableness of the weights from our model, a numerical example is provided in Section 4.3. The results show that, compared to Wu’s method [36] and Song’s method [16], the weights from our method not only reflect the differences among DMUs but also do not ignore any efficiency score. Furthermore, an empirical application of DEA cross-efficiency aggregation and evaluation is given, in which the traditional DEA cross-efficiency method, Doyle’s method [4], and the method proposed in this paper are compared. The comparative analysis and results show that our cross-efficiency model with weights could be more moderate, flexible, and general, which can extend the theoretical research of DEA evaluation.

Although we numerically demonstrate the performance of our method, more extensive studies are needed in future work, such as handling undesirable outputs and examining the deviation degree under different preference formulations. In addition, the selection of the weight adjustment coefficient should also be considered. We regard this research as a starting point for those future studies.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The research was supported by the National Social Science Foundation of China (18BJY093), the National Natural Science Foundation of China (71901044), Humanities and Social Sciences Foundation of Ministry of Education of China (19XJC630011, 18YJC630009), and the Humanities and Social Science Foundation of Chongqing Municipal Education Commission of China (18SKGH031, 19SKGH043).