Mathematical Problems in Engineering
Volume 2018, Article ID 2108726, 14 pages
https://doi.org/10.1155/2018/2108726
Research Article

A Multiple Criteria Decision Analysis Method for Alternative Assessment Results Obeying a Particular Distribution and Application

1Jinling Institute of Technology, Nanjing, Jiangsu 211169, China
2College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China

Correspondence should be addressed to Wang He-Hua; whh@jit.edu.cn

Received 6 October 2017; Revised 8 March 2018; Accepted 26 March 2018; Published 3 May 2018

Academic Editor: Josefa Mula

Copyright © 2018 Wang He-Hua et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A multiple criteria decision analysis (MCDA) problem is studied in this paper, in which the evaluation results obey a particular distribution. First, a grey target decision analysis framework is proposed to determine the uncertain parameters and criteria weights. A measurement for the comprehensive off-target distance is defined, which includes the undetermined parameters. Second, to satisfy the requirement that the assessment results follow a specific distribution (such as a normal distribution), an optimization model that incorporates the off-target distance constraints is proposed based on the skewness and kurtosis test method. Third, a particle swarm optimization (PSO) algorithm is extended to solve the proposed model by seeking the appropriate parameters and weights. Fourth, a numerical example is presented to demonstrate the feasibility and application of the proposed method. Finally, the proposed model is extended to other distribution requirements.

1. Introduction

Over the past few decades, multiple criteria decision analysis (MCDA) tools have been adopted to enable a decision-maker (DM) to make a decision from a finite set of alternatives with respect to multiple criteria [1]. MCDA is defined by a set of alternatives (denoted by A = {a_1, a_2, …, a_m}), from which a DM can select the optimal alternative according to the identified set of criteria (denoted by C = {c_1, c_2, …, c_n}). The evaluation value of alternative a_i with respect to criterion c_j given by the DMs is represented as x_ij. The key tasks of MCDA methods are to study the weight of each criterion, integrate the multiple criteria, and aggregate the information from different DMs. On that basis, many methods have been developed to extend the possible applications of MCDA.

In the process of MCDA, when many alternatives must be assessed, decision-makers often prefer that the assessment results obey a certain distribution, such as the normal distribution. On one hand, if the evaluation results of the alternatives follow a normal distribution, the corresponding results are considered reasonable. On the other hand, when the attribute weights and the decision rules are not very clear, the alternatives often exhibit a normal distribution anyway. Based on the framework of MCDA, the comprehensive value can be obtained by aggregating the values and weights of the criteria. In most practical cases, MCDA problems involve multiple dimensions, in which the consideration of criteria interactions is essential. In this paper, we consider a new situation in which the assessment results obey a normal distribution and the criteria are independent. Under such conditions, the comprehensive value still obeys a normal distribution that depends on the criteria weights, so the multidimensional problem reduces to a weighting problem. Thus, we mainly study how to derive the weights of the criteria and make a decision under a normal distribution. As many decision-making problems pertain to the MCDA category, we propose a method for deriving the weights of the criteria as decision-making parameters. Considering the simplicity of the grey target decision among MCDA methods, this paper proposes parameter settings for normally distributed problems based on the grey target decision analysis framework. Our main contribution is to propose an analysis framework for the MCDA problem with a normal distribution requirement, based on the grey target decision method, and then to design an algorithm for the resulting model. First, we propose an improved grey target decision method to optimize the target settings and criterion weights. Second, we establish an optimization model that tests the normal distribution of the results based on the skewness and kurtosis.
Third, we develop a particle swarm optimization (PSO) algorithm to solve the model. According to the results of the model, we can verify the rationality of the target center, obtain the criteria weights, and select the optimal alternative. Furthermore, the analysis method is applied in a case and extended to two other cases.

The structure of this paper is presented as follows. Section 2 reviews the related literature. In Section 3, the grey target decision method is described. In Section 4, we describe the random variable settings of the target, the measurement of the combined off-target distance, and the constraints of normal distribution assessment. Then, we propose a model to determine the optimal target and criteria weights. In addition, an intelligent optimization algorithm is developed to solve this model. In Section 5, we apply the proposed methodology by using real data, and some extensions are suggested in Section 6. Finally, conclusions in the paper are given in Section 7.

2. Literature Review

A number of well-known MCDA methods have been studied in recent years [1], such as multiattribute utility theory (MAUT), the analytic hierarchy process (AHP), the linear programming technique for multidimensional analysis of preference (LINMAP), and the elimination and choice translating reality (ELECTRE) method. MAUT is an effective tool for decision analysis and quantitative analysis of decision-making problems, carried out by utility analysis [1, 2]. The AHP, introduced by Saaty [3], is an effective tool for dealing with complex decision-making, helping decision-makers set priorities and select the best alternative. LINMAP aims to obtain the best alternative, the one closest to the ideal alternative, through pairwise comparisons of alternatives [4]. The ELECTRE method is modeled by using binary outranking relations between two alternatives, which considers four situations [5]. These methods are widely used in MCDA problems as basic analysis tools. Next, we review the literature according to the steps of MCDA. First of all, multiple preference styles have been obtained in many studies, because preference structuring and modeling are of great importance. Ordinal information was studied and preference programming was suggested in [6]. The pairwise judgment in the AHP is widely applied as a preference style [3]. To solve uncertain decision-making problems, linguistic preferences [7], interval numbers [8], and triangular fuzzy matrices [9] were developed in many studies. From these studies, we can see that various preference styles are used to express the DM’s judgments. The combined use of different uncertain styles will be a research trend for MCDA problems.
Second, many aggregation methods for MCDA have been developed to aggregate information effectively, such as the utility-based individual preference [2], the nonlinear information aggregation method [10], the prioritized weighted aggregation [11], the ordered weighted averaging (OWA) operator and the families of OWA [12], the Choquet-Mahalanobis operator [13], and the methods of determining OWA weights [14]. Optimization models and aggregation operators are the main approaches. In addition, applications of MCDA are increasingly common in practice. For example, supplier selection was studied with the fuzzy technique for order of preference by similarity to an ideal solution (TOPSIS) in [15]; global e-government was evaluated in [16]; and energy policy support was researched in [17].

Uncertain situations often arise when DMs make a decision. Thus, many methods have been developed to handle uncertainty. Podinovski examined the ranking order of decision alternatives under uncertainty with unknown utility functions and rank-ordered probabilities [18]. Jiménez et al. introduced a dominance intensity measuring method to derive a ranking order of alternatives on the basis of multiattribute utility theory (MAUT) and fuzzy sets theory [19]. The interval AHP was applied to solve complex cases [20]. Zhao et al. aggregated hesitant triangular fuzzy information based on Einstein operations and applied it to multiple criteria decision-making [21]. Lee and Chen [22] proposed a new fuzzy decision-making method based on likelihood-based comparison relations of hesitant fuzzy linguistic term sets. Due to the complexity and uncertainty in decision-making problems and the inherent subjective characteristics of decision-makers’ judgments, the descriptions of criteria and alternatives involve uncertainty [23]. For problems with criteria interactions, classification methods have been studied [13] for some particular distributions. The problem can be treated as a particular MCDA case with uncertain parameters, which should be analyzed from the perspective of modeling or optimizing the related parameters. In most methodologies, it is assumed that each of the criteria is independent of the others. In this paper, we also focus on the MCDA method under the assumption of independent criteria. In recent years, however, many methods have focused on criteria interactions. Different from the methods for independent criteria, methods for interacting criteria also involve the appropriate selection of aggregation functions and distances. Fuzzy measures can be used to express redundancy, complementariness, and interactions among criteria [24, 25].
The Choquet integral [26] and the Sugeno integral [27] provide two main aggregation functions for dealing with criteria interactivity. Distance functions based on the fuzzy integral have also been studied [25]. The Mahalanobis distance and the Euclidean distance are used to measure the distance between objects [28]. The Mahalanobis distance [13] uses the inverse of the variance-covariance matrix to measure the covariance distance of data, which is useful in problems considering dependence between criteria. The Euclidean distance applies only to independent variables, but it is simple to compute and interpret. Although this paper assumes independent criteria, research on criteria interactions is an interesting direction for future work. In this paper, we focus on the MCDA method for which the assessment results of alternatives obey a particular distribution under the assumption of independent criteria. The relevant research involves the determination of criteria weights, the selection of the decision model, and the decision parameter settings, as well as information reasoning and assembly. For example, criterion weights were determined based on a feedback model in multiple criteria group decision analysis problems with requirements for group consensus in an evidential reasoning context [29]. Ahn [30] presented a simple method for finding the extreme points of various types of incomplete criteria weights. Chun [31] solved the multicriteria decision problem with sequentially presented decision alternatives based on the assumption that the decision-maker has a major criterion that must be “optimized” and minor criteria that must be “satisfied.” Yang and Wang proposed a linguistic decision aiding technique based on incomplete preference information [32]. However, few studies have focused on problems in which the alternatives obey a specific distribution.

The grey system theory is also widely used in MCDA, which usually deals with the uncertain problems. Since Deng first introduced the grey system, its theory and application have been developed rapidly in recent years [33]. The grey target decision method is an important application in the grey system theory in decision-making problems. The method refers to the targeted value setting in the satisfactory effect region, which is currently used in project evaluation, supplier selection, production efficiency evaluation, and other fields [34]. Chen et al. [35] presented an approach for monitoring equipment conditions based on the grey target decision. The measurement for target distance and the weight optimization model were also studied for adaptation to the three-parameter interval grey number [36]. Considering the multiple stages linguistic label, the grey target decision method is proposed to measure the target distance [37]. When the value distribution of information is asymmetrical, a multiscale extended grey target decision method is proposed to deal with the problems with interval grey numbers [38].

Information is often imprecise in most complex decision-making problems, so the performance of alternatives cannot be predicted with certainty. From the above review, the existing MCDA research focuses on fuzzy and uncertain information processing, weight-setting methods and optimization models, and preference data aggregation methods. However, most MCDA research rarely discusses the multiple criteria decision analysis problem in which the alternative information obeys a specific distribution. Therefore, it is necessary to study such decision-making problems when the decision-making object is subject to a distribution.

3. Grey Target Decision Analysis Theory

The grey systems theory was developed by Deng to study problems involving “small samples and poor information” [33]. Its research objects can be classified into black, white, and grey ones according to a certain cognitive hierarchy. In this theory, a system is usually defined as a black box if its internal structures and features are completely unknown, whereas a white box indicates that the internal features of the system have been fully explored. If the internal features of a system are partly known and partly unknown, it is known as a grey system. The differences between fuzzy and grey methods are summarized in a previous study [34]. Unlike fuzzy mathematics, the grey systems theory focuses on research objects that have a clear extension and unclear intension, whereas fuzzy mathematics has its strength in the study of problems with “cognitive uncertainties.” Objects of fuzzy mathematics have the characteristic of “having a clear interior extent without a clear exterior extent.”

In addition to controlling the intrinsic social, economic, agricultural, and ecological characteristics, the main research tasks of grey system theory are forecasting and decision-making. The nonuniqueness principle is an important fundamental principle of grey theory. This implies that the solution to any problem with incomplete and nondeterministic information is not unique. Strategically, the principle of nonuniqueness is realized through the concept of grey targets. The idea and analysis of the grey target decision are presented in Figure 1. When the optimal or satisfactory goal is set to be the grey target, the area between the grey target and the alternative is essentially an optimal or satisfactory area, which is not a strict optimal result. In practical applications, the grey target decision method is flexible. In many cases, it is impossible to achieve absolute optimality, so it is often desirable to have a satisfactory outcome. In the actual application process, the grey target can gradually shrink and reduce to a point.

Figure 1: An example of grey target decision effects.

The following three definitions were all derived according to a previous reference [34].

Definition 1 (see [34]). Let a_k and b_k (k = 1, 2, …, s) be the lower and upper threshold values for the situation effects with respect to objective k. Then the grey target of s-dimensional decision-making is defined as S = {r = (r_1, r_2, …, r_s) | a_k ≤ r_k ≤ b_k, k = 1, 2, …, s}.

Definition 2 (see [34]). Let r^0 = (r_1^0, r_2^0, …, r_s^0) ∈ S represent a center; then the expression {r | (r_1 − r_1^0)^2 + (r_2 − r_2^0)^2 + ⋯ + (r_s − r_s^0)^2 ≤ R^2} is defined as the s-dimensional spherical grey target with the center r^0 and radius R, where r^0 is considered as the optimum effect vector.

Based on the definitions of both Euclidean distance [28] and the grey target distance [34], the grey target distance is actually a Euclidean distance. It is more suitable for the assumption of independent criteria and is easy to compute. Therefore, we choose the grey target distance in this paper.

Definition 3 (see [34]). Let r_i = (r_i1, r_i2, …, r_is) denote an effect vector. Then the off-target distance is defined as |r_i − r^0| = sqrt((r_i1 − r_1^0)^2 + (r_i2 − r_2^0)^2 + ⋯ + (r_is − r_s^0)^2), where the off-target distance reflects the superiority of effect vectors.

The grey target decision method is introduced in detail in [34]. From the review of the grey target decision analysis method, three aspects are involved in the existing MCDA studies. First, the method has already been applied in numerous real-world applications. Second, the grey target decision has been extended to uncertain cases in which the information is expressed in terms of interval numbers, fuzzy numbers, and other uncertain forms. Third, how to aggregate multistage information is also an interesting topic. In some cases, the grey target decision is similar to the TOPSIS method. The grey target decision reflects well the “nonuniqueness” principle of grey system theory. Two connotations are involved in the grey target decision. Firstly, the object of the grey target is constrained and nonunique. In the method, the upper and lower values of the alternatives with respect to each criterion are defined to describe the decision-making information. This can be considered a feasible region in which a relatively satisfactory effect is achieved, thus reflecting the constraint of the target. Secondly, the solution of the grey target has a “nonunique” characteristic. When facing many possible alternatives, the grey target decision can combine qualitative analysis, supplementary information, and other quantitative methods to determine one or more satisfactory solutions, which is helpful for combining qualitative analysis with quantitative calculation. These two connotations show that we can achieve the target, select a better alternative, and optimize the route by using the grey target decision. In addition, the grey target decision may not be very practical when the alternatives follow other distribution styles, such as a normal distribution. Accordingly, different distance measurements should be considered according to the actual situation, such as the Mahalanobis distance [28].

The positive and negative targets of the grey target decision are similar to the positive and negative ideal points in the TOPSIS method. Both are optimal effect measures recognized by decision makers. The differences between the two methods can be described from two aspects. Firstly, under the grey target decision framework, the positive and negative targets combine subjective and objective characteristics, whereas the positive and negative ideal alternatives in TOPSIS focus on the calculation of pure proximity between alternatives, which is objective in character. Secondly, the positive and negative ideal alternatives are mostly set from the maximum/minimum values of all the evaluated alternatives with respect to the criteria, whereas the positive and negative targets reflect the nonuniqueness principle of the grey target decision: in some cases, there may be other values besides the extreme values.

4. The Key Part of the Proposed Method

This paper mainly addresses the problem of evaluating multiple alternatives whose assessment results should present a normal distribution, using an established decision model. Based on the grey target decision framework, we are mainly interested in the off-target distances, which reflect the merits of the alternatives. This paper explores the following two aspects. Firstly, considering the characteristics of the existing grey target decision method, new setting methods for the target centers and off-target distances are proposed within the grey target decision framework to make the evaluation results as regular as possible. Secondly, the off-target distances of the evaluated alternatives should meet the statistical and mathematical characteristics of a normal distribution. Thus, this paper proposes an improved grey target decision method for target-setting optimization and evaluation criteria weight optimization. Furthermore, we establish an optimization model according to the skewness and kurtosis test of the normal distribution. The thought process is shown in Figure 2.

Figure 2: The main idea of the paper.
4.1. Random Variable of Target Center Settings

Following the idea of grey system theory, the target center should first be selected. Target centers are set according to both the best and the worst alternatives. Once an evaluation alternative is defined, the target center takes a specific value.

For a beneficial-based criterion, c_j^+ = max_i x_ij and c_j^− = min_i x_ij. (1)

For a cost-based criterion, c_j^+ = min_i x_ij and c_j^− = max_i x_ij. (2)

This method of setting the target center is simple and easy to understand, but it has a shortcoming: it is likely to cause rank reversal when adding a new alternative or deleting an existing one. On one hand, the target center is the current optimal (worst) criterion value. On the other hand, it is the potential optimal (worst) criterion value, which acts as a benchmark for improvement. Different settings of the target center may lead to different alternative rankings, and a relatively clear target center is not easy to find because its setting is quite sensitive. In practical decisions, it is easier to give a possible distribution range for the target center than a specific numerical value. In this paper, we first give the possible distribution range of the target center based on prior information. Then, we choose the optimal target center according to the specific requirements of the decision. The specific steps are described as follows.

The positive and negative target center reference points can be determined by using (3) and (4).

The possible distribution range of the positive and negative target centers is determined according to the possible similar alternative sets and expected values.

An example of a positive target center is shown in Figure 3.

Figure 3: The idea of setting the positive target center.

Thus, for a beneficial-based criterion, we have

For a cost-based criterion, (6) must be satisfied:

An improved grey target-setting method is therefore proposed, which creates the foundation for the target optimization.
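The interval-based target-center setting described above can be sketched in a few lines of code. This is an illustrative reading rather than the paper's exact equations (3)-(6): the `expand` margin, which widens the range beyond the current best/worst values to act as the improvement benchmark, and the helper name `center_ranges` are assumptions of the sketch.

```python
import numpy as np

def center_ranges(X, benefit, expand=0.1):
    """Sketch of interval target-center setting.

    X       : (m, n) decision matrix (m alternatives, n criteria)
    benefit : (n,) booleans, True for beneficial criteria, False for cost
    expand  : assumed fraction of the best-worst span used to widen the
              range beyond the current best/worst value (a benchmark for
              potential improvement, as discussed above)

    Returns (pos, neg): each a (2, n) array holding the lower and upper
    bounds of the possible range for the positive / negative center.
    """
    benefit = np.asarray(benefit)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    margin = expand * np.abs(best - worst)
    # push the range past the current best (and below the current worst)
    # in the direction of improvement / deterioration
    step = np.where(benefit, margin, -margin)
    pos = np.sort(np.stack([best, best + step]), axis=0)
    neg = np.sort(np.stack([worst, worst - step]), axis=0)
    return pos, neg
```

For instance, with one benefit and one cost criterion, the positive-center range for the benefit criterion starts at the current maximum and extends slightly beyond it, matching the "current optimal value versus potential optimal value" discussion above.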

4.2. Measure of Comprehensive Off-Target Distance

In this section, we measure the off-target distance in accordance with the grey target decision theory. The distance from the ith alternative to the positive target center can be expressed as follows: The distance from the ith alternative to the negative target center can be expressed as

Considering the distance to the positive and negative target centers of alternatives, the comprehensive off-target distance can be expressed as follows:

In general, the alternatives can be ranked according to the comprehensive off-target distance, and a larger comprehensive off-target distance indicates a better alternative. Because the criteria have different dimensions, the comprehensive off-target distance is subjected to distance normalization. The comparison is carried out between each criterion and its grey target, which have the same units. For criterion j, the off-target distance of each alternative is normalized by the corresponding criterion span so that all distances belong to [0, 1]. This paper does not adopt a normalized decision matrix to deal with the unification of dimensions, because the largest normalization value is 1 in this type of matrix, which makes the setting and understanding of the target center difficult. Taking the positive target center as an example, this value can be understood as the optimal value that we try to reach and provides the improvement target for other alternatives.

In the expression of the comprehensive off-target distance, the criterion weight is also a sensitive parameter. In this paper, we consider the criterion weight as a possible range, as it is much easier to give a range than a definite number. Thus, we have w_j ∈ [w_j^−, w_j^+] and ∑ w_j = 1.
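Since the distance equations are not reproduced here, the following sketch assumes a weighted Euclidean form for the positive and negative distances, with a per-criterion normalization by the span between the two centers; combining them as a relative closeness in the last line is one common construction and is likewise an assumption of the sketch, chosen so that a larger comprehensive distance indicates a better alternative, as stated above.

```python
import numpy as np

def comprehensive_off_target(X, c_pos, c_neg, w):
    """Assumed comprehensive off-target distance (larger = better).

    X      : (m, n) decision matrix
    c_pos  : (n,) positive target center
    c_neg  : (n,) negative target center (must differ from c_pos)
    w      : (n,) criteria weights, summing to 1
    """
    span = np.abs(c_pos - c_neg)          # per-criterion normalizer
    d_pos = np.sqrt(((w * (X - c_pos) / span) ** 2).sum(axis=1))
    d_neg = np.sqrt(((w * (X - c_neg) / span) ** 2).sum(axis=1))
    # relative closeness to the positive center: 1 at c_pos, 0 at c_neg
    return d_neg / (d_pos + d_neg)
```

An alternative sitting exactly on the positive center scores 1, one on the negative center scores 0, and everything else falls strictly in between.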

4.3. Assessment Constraint of a Normal Distribution for Alternatives

When a large number of alternatives need to be evaluated, it is necessary to verify the validity of the assessment rules. There are many common distribution functions in social evaluation problems, such as the uniform distribution, the beta distribution, and the Gaussian distribution. The normal distribution has an overwhelming social significance. Usually, if the results of the evaluated alternatives follow a normal distribution, the evaluation rules are shown to be objective, effective, and discriminating. The idea of this paper is applied to MCDA under the requirement of a normal distribution. In practical applications, many kinds of distribution are often encountered, and distribution testing is the premise for solving the problem [26]. In this paper, the off-target distances of the evaluated alternatives should be normally distributed, so we should test whether the data satisfy the normal distribution. Many methods can be used to test for a normal distribution, including the skewness and kurtosis test. The skewness and kurtosis of a sample converge to those of the overall distribution. If the hypothesis that the off-target distance is normally distributed is true, the skewness of the sample should be close to 0 and the kurtosis of the sample should be close to 3.

Suppose that we have n samples from the population and test the hypothesis H0 that the samples are normally distributed. For a certain significance level α, the rejection region is given by (8). If (8) is satisfied, the data are not normally distributed; otherwise, the data are considered normally distributed. Here n is the number of samples, d_i is the comprehensive off-target distance of alternative i, and the sample skewness and kurtosis are computed by the following equations, where g1 and g2 represent the skewness and kurtosis of the samples, respectively. If the off-target distance obeys a normal distribution and n is sufficiently large, we can calculate that
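The test statistics were not reproduced above, so the following sketch computes the sample skewness and kurtosis directly and standardizes them by their asymptotic standard deviations under H0 (skewness ~ N(0, 6/n), kurtosis ~ N(3, 24/n) for large n), which is the idea behind the rejection region; the function names and the two-sided critical value `z` are assumptions of the sketch.

```python
import math
import numpy as np

def skew_kurt(d):
    """Sample skewness (close to 0 under normality) and kurtosis (close to 3)."""
    d = np.asarray(d, dtype=float)
    mu, s = d.mean(), d.std()
    g1 = ((d - mu) ** 3).mean() / s ** 3
    g2 = ((d - mu) ** 4).mean() / s ** 4
    return g1, g2

def normality_check(d, z=1.96):
    """Accept H0 (normal) when both standardized statistics fall inside
    the acceptance region; z = 1.96 corresponds to alpha = 0.05."""
    n = len(d)
    g1, g2 = skew_kurt(d)
    u1 = g1 / math.sqrt(6.0 / n)          # standardized skewness
    u2 = (g2 - 3.0) / math.sqrt(24.0 / n)  # standardized excess kurtosis
    return abs(u1) <= z and abs(u2) <= z
```

Clearly skewed data (e.g. exponential, skewness about 2) or flat data (uniform, kurtosis about 1.8) are rejected, while the statistics of a genuinely normal sample stay near 0 and 3.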

Based on the above analysis, the off-target distance is a function of the positive and negative target centers and criteria weights. Thus, if the evaluated off-target distances are required to be normally distributed, we should have the following relationship:

Actually, we must seek a set of reasonable target centers and criteria weights to satisfy (11a). For the convenience of solving the problems, (11a) can be rewritten as

If all deviation variables are equal to 0, the condition of a normal distribution is satisfied exactly. A smaller deviation means that the distribution is closer to a normal distribution. Thus, the problem is to find a set of decision variables that satisfy (11b) and make the deviations close to 0. Considering that these deviation variables are not negative, the following conditions can be established:

When testing for a normal distribution, there are certain requirements on the sample size; the number of samples should generally be at least 30.

4.4. Optimal Target Center and Criterion Weight Determination Model

According to the above analysis, the mathematical model described by (13) can be established. Three aspects have been considered. Firstly, we solve the decision-making problems based on the grey target decision-making framework. Secondly, the results of the grey target decision should satisfy the normality conditions. Thirdly, the weights and related parameters should satisfy the basic requirements; for example, the criteria weights should sum to 1, each within its given range.

In this model, the first constraint characterizes the off-target distance of the grey target decision; the second one constrains the off-target distance to be normally distributed; and the third part concerns the target centers and criteria weights. From this, we can obtain the best positive and negative target centers and criterion evaluation weights. At this point, we can rank the alternatives according to the off-target distances.

From (13), there must be an optimal solution, because there is always a set of positive and negative target centers and criteria weights that satisfies the required constraints; the difference lies in the value of the deviation variables. Moreover, from the perspective of the model features, there is a squared term in the off-target distance, so the model is a nonlinear optimization model. The number of decision variables is determined by the number of positive and negative target centers and criteria weights: if there are n criteria, we have 3n decision variables (n positive target centers, n negative target centers, and n weights).

4.5. Algorithm Design for Solving the Model

In order to provide more flexible applications, we can use existing optimization algorithms to solve the proposed nonlinear optimization model, which can also be embedded in user systems. Classical metaheuristic algorithms include simulated annealing (SA) [39], variable neighborhood search (VNS) [40], the genetic algorithm (GA) [41], and particle swarm optimization (PSO) [42–44]. PSO is a population-based search method. In the search process, each particle (an underlying solution to the optimization problem) changes constantly until it reaches an equilibrium or optimal state. PSO is selected for several reasons. First, PSO can remember good solutions, whereas the previous knowledge of GA is lost as the population changes. Unlike GA, PSO has no crossover or mutation operations; the particles are updated only through their internal speed. Thus, PSO has a simpler principle and fewer parameters and is easier to implement. Second, the SA algorithm can solve NP-hard problems and has very wide applications, but its parameters are difficult to control. Third, the basic idea of VNS is to obtain a local optimal solution by changing the neighborhood structure to expand the search scope; based on the local optimal solution, the same method is used to search for another local optimal solution. Thus, the global search ability of the VNS algorithm is worse than that of PSO. In summary, the PSO method shows superior features over the other methods, such as computational simplicity, fast global optimization, and good efficiency. The basic flow of the PSO algorithm is shown in Figure 4. It is a stochastic method designed for nonlinear problems, which makes it suitable for the problem in this paper. Therefore, we design an intelligent optimization algorithm that simulates the cooperation and competition behavior of biological groups to solve the computing problem.
Combined with the mathematical model described above, the detailed steps of our PSO algorithm are presented in Figure 4 [42].

Figure 4: The basic flow of particle swarm optimization (PSO) algorithm.

Based on the decision variables of the problem, the target center and criteria weights are essentially certain numbers. However, with the limits imposed by the interval distribution, the particle design should be conducted under the interval constraints.

(a) Initial Particle Design. Particles are initialized randomly, with the location of the ith particle expressed as a point in the multidimensional solution space. The first 2n components represent the positive and negative target centers of each of the n criteria, while the following n components represent the criteria weights (real numbers generated randomly in their respective limited ranges). The criteria weights should satisfy ∑ w_j = 1.

(b) Particle Speed. The speed of a particle cannot exceed the maximum possible speed. Higher speeds generally ensure good global search ability, whereas lower speeds permit accurate search optimization over a small range. The target-center dimensions of the ith particle and the optimization search speed of the criteria weights should not exceed the upper and lower limits of their ranges. The condition is expressed in terms of (5) and (6).

(c) Calculation of Fitness. Calculate the value of the deviation variable in (11a) and (11b).

(d) Update the Particle Swarm. Suppose that the optimal solution after several iterations is pbest_i for individual particle i and gbest for the group. Particle i then updates its velocity and location according to

v_i ← w·v_i + c1·r1·(pbest_i − x_i) + c2·r2·(gbest − x_i),  x_i ← x_i + v_i,

where r1 and r2 represent random numbers between 0 and 1 that are regenerated for each velocity update, while w represents an inertial coefficient, which can either dampen the particle's inertia or accelerate the particle in its original direction. The value of w is typically between 0.8 and 1.2; we set it to 0.9. c1 and c2 represent learning factors, both usually close to 2: c1 affects the size of the steps that the particle takes toward its individual best candidate solution, and c2 affects the size of the steps that the particle takes toward the global best candidate solution. Thus, we set both c1 and c2 to 2 [42–44].
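The update rule in (d), with w = 0.9 and c1 = c2 = 2 as stated above, can be sketched as follows; the array shapes and the clipping step used to honor the interval limits in (b) are assumptions:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, bounds, w=0.9, c1=2.0, c2=2.0, rng=None):
    """One PSO update: inertia plus cognitive and social pulls.

    x, v    : (n_particles, dim) positions and velocities
    p_best  : (n_particles, dim) per-particle best positions
    g_best  : (dim,) global best position
    bounds  : (dim, 2) lower/upper limits used to clip positions
    Parameter values follow the text (w = 0.9, c1 = c2 = 2).
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)  # fresh each update
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, bounds[:, 0], bounds[:, 1])     # keep within ranges
    return x, v
```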

According to the above description, the method proposed in this paper can be summarized by the following four steps.

Step 1. Apply the framework of multicriteria decision-making to analyze the problem. In this step, we establish the alternative set and criteria set and collect the criteria data for alternatives.

Step 2. Represent the problem according to the grey target decision framework. In this step, we determine the general ranges of positive and negative target centers before investigating the ranges of the criteria weights. On this basis, we establish the distances between each alternative and its positive and negative target centers with respect to each criterion.
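One common way to realize the distances established in Step 2 is a weighted Euclidean distance to each target center; the paper's exact aggregation may differ, so the following is an illustrative sketch with hypothetical inputs:

```python
import numpy as np

def off_target_distances(X, w, r_pos, r_neg):
    """Distances of each alternative to the positive and negative
    target centers (one common grey-target form: weighted Euclidean
    distance). X has one row per alternative and one column per
    criterion; w holds the criteria weights.
    """
    X = np.asarray(X, dtype=float)
    d_pos = np.sqrt(((X - r_pos) ** 2 * w).sum(axis=1))  # to positive center
    d_neg = np.sqrt(((X - r_neg) ** 2 * w).sum(axis=1))  # to negative center
    return d_pos, d_neg
```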

Step 3. According to the kurtosis and skewness test method of a normal distribution, we establish the optimization model for the target centers and criteria weights combined with the distance function in Step 2. Following this, we design the algorithm to solve the model.
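The kurtosis and skewness test in Step 3 follows a standard large-sample argument: under the normality hypothesis, sample skewness is approximately N(0, 6/n) and sample excess kurtosis approximately N(0, 24/n). A minimal sketch (the critical value 1.96 corresponds to a 5% significance level and is an assumption):

```python
import numpy as np

def normality_by_moments(x, z_crit=1.96):
    """Skewness-kurtosis normality check (large-sample approximation).

    Both standardized statistics should fall within +/- z_crit for the
    normality hypothesis to be accepted.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)                 # sample skewness
    ex_kurt = np.mean(z ** 4) - 3.0        # sample excess kurtosis
    u_skew = skew / np.sqrt(6.0 / n)
    u_kurt = ex_kurt / np.sqrt(24.0 / n)
    return abs(u_skew) <= z_crit and abs(u_kurt) <= z_crit
```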

Step 4. Check the rationality of the target centers and criteria weights according to the PSO results. We clarify the comprehensive distances of each alternative to its positive and negative target centers and rank the alternatives according to the off-target distance.

The method in this paper has the following characteristics. Firstly, an improved grey target decision framework with flexible target center settings is proposed. Secondly, an analysis model under the constraint of a normal distribution, suitable for this problem, is put forward. Thirdly, a corresponding solution algorithm is designed for the method.

5. Case Analysis

In this section, we consider the performance of important suppliers for aero-engines as an example to illustrate the analysis process. To evaluate such performance reasonably, we establish three evaluation criteria based on extensive investigation. Aero-engine development is characterized by technical difficulty, high cost, and a long development period, and the aero-engine is crucial because it has a decisive influence on the performance of the aircraft and the success of aircraft development. The management of aero-engine suppliers has the following features. First, the integration of logistics, information flow, and capital flow should be emphasized. Second, the enterprises in the supply chain form a type of cooperative competition in which cooperating enterprises share information, risk, and benefits and offer technical support for each other's development. Third, the sharing and integrated application of information technology are of great importance. Three aspects (180 points in total) affect aero-engine supplier performance: the internal operation system (C1, 30 points), the supply system (C2, 50 points), and the quality system (C3, 100 points). The internal operation system reflects the internal management level and development prospects of the suppliers, including process planning at the enterprise level and technical management at the personnel level. The supply system reflects the efficiency and economy of the supplier's behavior, including the acceptance rate of order changes, on-time delivery rate, total supply satisfaction rate, price level, quotation timeliness, and price fluctuation. The quality system indicates the quality of the supplier, covering product quality and service quality, such as average eligibility rate, quality accidents, after-sale service, and communication as well as cooperation ability.
It should be noted that all these criteria are calculated according to fixed rules, so the evaluation results have integrated meanings. The performances of the 44 suppliers are given in Table 1, where the "Number" column identifies each supplier and the numbers in columns C1, C2, and C3 represent the suppliers' performances on the corresponding criteria.

Table 1: Performances of 44 suppliers for aero-engines.

To enable the practical use of evaluation results, such as the eligibility assessments, we analyze the assessment results with the normal distribution constraints and compare the outcomes with the case of no constraints.

Step 1. We establish an evaluation criteria system and collect the basic data according to the multicriteria decision-making analysis framework.

Step 2. By analyzing the three evaluation criteria for the 44 suppliers, we obtain the statistical analysis results shown in Table 2. The maximum and minimum values of each criterion can be used as the basis for the positive and negative target centers. According to the practical situation and the development of the enterprise, we then obtain the possible distribution ranges of the positive and negative target centers as well as the weights for each criterion, as shown in Table 3.

Table 2: Mathematical criteria result for the alternative to be evaluated.
Table 3: Possible distribution range of positive and negative target centers as well as criteria weights.

Step 3. Build a model and design an algorithm.

Step 4. By optimizing the model (particle swarm size set to 20), we obtain the grey target decision for the off-target distance of the 44 suppliers (see Table 4), where the skewness and kurtosis statistics of the sample observations fall within the acceptance thresholds. Therefore, we accept the null hypothesis that the evaluation results obey a normal distribution. Repeated calculations show only minor differences and mostly converge within 10 iterations.

Table 4: Optimization values with normal distribution constraints.

We now compare these results with those given by three other methods, which allows us to analyze the problem from different perspectives.

Method 2. We apply the classic grey target decision method directly, which takes the positive and negative target centers to be the best and worst values among the current alternatives, respectively, and takes the criteria weights to be the midpoints of their intervals (0.3, 0.45, and 0.25). In this case, the skewness and kurtosis statistics fall outside the acceptance thresholds, so the normal distribution hypothesis is rejected.

Method 3. We use a linear weighting method of multicriteria decision-making to solve the problem, in which the comprehensive criterion value of each alternative is the weighted sum of its criterion values and each weight is the midpoint of its interval. The resulting skewness and kurtosis statistics fall outside the acceptance thresholds, so the normal distribution hypothesis is rejected.

Method 4. We use the traditional TOPSIS method to calculate the results, with the criteria weights determined by the entropy weight method. The values of the alternatives are calculated according to the traditional TOPSIS procedure (see Table 5). The skewness and kurtosis statistics again fall outside the acceptance thresholds, so the normal distribution hypothesis is rejected.
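The entropy weight method used in Method 4 has a standard formulation: criteria whose values are more dispersed across alternatives receive larger weights. A minimal sketch (the decision matrix is hypothetical):

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy weight method.

    Columns of X are criteria, rows are alternatives; a criterion with
    larger dispersion (lower entropy) receives a larger weight.
    """
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                     # column-normalized proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)   # entropy per criterion in [0, 1]
    d = 1.0 - E                               # degree of diversification
    return d / d.sum()
```

A perfectly uniform criterion has entropy 1 and therefore weight 0, since it cannot discriminate between alternatives.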

Table 5: Comparison of four methods.

A comparison of the results from the four methods is shown in Table 5, where the method proposed in this paper is Method 1. In Table 5, the "Number" column identifies each supplier, while the "Off-target distance" and "Alternative value" columns represent the comprehensive performance values of the suppliers. The "Rank" column shows the rankings of the comprehensive performance values.

The four columns of off-target distances and alternative values in Table 5 are also presented as histograms in Figure 5, which provides a clear representation of the differences in the normality of the results of the four methods.

Figure 5: The histogram and normal distribution map of different methods.

From Table 5, we can clearly see that different results are obtained depending on the method of analysis. Relatively speaking, the result of Method 1 is similar to that of Method 2 but differs considerably from those of Methods 3 and 4; the top six results given by Methods 1 and 2 are the same. The target center in Method 1 is a fixed value that does not change as other alternatives are added or removed, whereas Method 2 does not have this feature. From the histograms, we can see that the results of Methods 1 and 2 are more concentrated than those of Methods 3 and 4. In the histogram and normal distribution map of Methods 2–4, the mean of Method 2 is off-center, while the data of Methods 3 and 4 are relatively dispersed. If there is an error in the evaluation process, the undetermined parameters may go beyond the predefined range, resulting in no feasible or no optimal solution. Thus, the data calculated by Method 1 show better normality and relative stability compared with the other methods.

6. Method Extensions

In this paper, we have studied a particular decision evaluation problem in which the evaluation results are defined under the normal distribution constraints. Here, we further analyze the applicable conditions.

Condition 1. Assessment results obey a strictly normal distribution according to qualitative analysis; product quality data are one example. This kind of knowledge can be obtained from experience or theory. Usually, the evaluation results of alternatives will obey a normal distribution if the property of the object is controlled by many factors, none of which occupies a dominant position.

Condition 2. The criterion weights and evaluation parameter settings for classification or evaluation are not absolutely certain. Considering the basic requirements for the assessments, we can suppose that they obey a normal distribution in some cases, such as human performance evaluation; in effect, this is an optimal design of the evaluation rule for a particular purpose. Normally, we hope that the evaluation results retain some diversity: the number of subjects attaining the best and worst scores should not be too high, while the middle part of the distribution should contain the great majority. Furthermore, if we do not have enough knowledge about the evaluation parameters, we can also require the results to be reasonable based on the normal distribution.

Condition 3. Analysis based on the normal distribution constraints has certain data requirements. The method in this paper is not suited to small samples; there should be at least 30 samples. In general, these conditions are satisfied in cases such as vendor performance evaluation in a large company, teacher performance in a college, and student capacity evaluation in a university. Moreover, if one has particular requirements, the extension modes described below can be adopted.

In this section, we propose extensions of the model above. First, if we wish to maximize the total discrimination degree among the alternatives under the normal distribution requirement, the mathematical model can be changed to the following:

From (15), a multiobjective model is obtained, which can be transformed into a single-objective model through the weighted-sum method. Under this model, the discrimination degree among the alternatives is maximized.

Second, if we want the evaluation results to be divided into several categories, some of which have particular requirements, then, assuming a given number of categories each containing a specified number of alternatives, the mathematical model can be changed into (16), where the additional constraints represent the required information for those categories:

When applying the method, there is an applicable requirement that the assessment results should obey the normal distribution. It is important to note that the statistical test in this paper cannot be used for groups with smaller sample sizes, but the same kind of consideration still applies. For example, by sorting the off-target distances in descending order and classifying them by size, we can make the numbers in the large, small, and medium clusters obey certain proportions and thus meet an approximate bell-shaped distribution.
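The proportion-based classification described above can be sketched as follows; the ascending-is-better convention and the 20/60/20 proportions are assumptions for illustration:

```python
import numpy as np

def classify_by_proportion(distances, props=(0.2, 0.6, 0.2)):
    """Split off-target distances into small/medium/large clusters in
    given proportions, approximating a bell-shaped grade distribution.
    Returns an array of labels aligned with the input order.
    """
    distances = np.asarray(distances, dtype=float)
    order = np.argsort(distances)          # ascending off-target distance
    n = distances.size
    n_small = int(round(props[0] * n))
    n_large = int(round(props[2] * n))
    labels = np.empty(n, dtype=object)
    labels[order[:n_small]] = "small"
    labels[order[n_small:n - n_large]] = "medium"
    labels[order[n - n_large:]] = "large"
    return labels
```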

7. Conclusion

In this paper, we studied a grey target decision model in which the assessment results obey the normal distribution. Based on the kurtosis and skewness testing rules, we proposed an optimization model for the target centers and criteria weights with normal distribution constraints on the off-target distance. We designed a PSO algorithm and analyzed its superiority by comparing it with different methods. The approach proposed in this study is applicable to decision-making situations in which the set of alternatives is relatively large and required to follow a normal distribution. The method suggested in this paper is easy to understand and use and can make the evaluation results of the alternatives obey a normal distribution. Two perspectives can be studied in future work. First, the method can be extended to further application cases. In particular, with the development of networks, more alternatives and massive amounts of information may be involved in decision-making, so the proposed method can serve more application fields. Second, other distribution styles can be considered, together with the corresponding analysis models and algorithms. In this paper, we focused mainly on the normal distribution; however, other distribution styles involving criteria interactions may exist. Thus, new aggregation functions (e.g., the Choquet integral and Sugeno integral) need detailed analysis in further research, and the Mahalanobis distance is useful for dealing with problems with criteria interactions. Prior information or massive data may help to solve the aggregation model. We should also identify special cases and select suitable methods to appropriately solve more complex decision-making problems.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Projects nos. 71502073 and 71171112), the Humanities and Social Science Foundation of Ministry of Education (Project no. 14YJC630120), Qing Lan Project of Jiangsu, and the Doctor Foundation of Jinling Institute of Technology (Project no. jit-b-201419).

References

  1. S. Greco, M. Ehrgott, and J. R. Figueira, Eds., Multiple Criteria Decision Analysis: State of the Art Surveys, Springer, 2016.
  2. Y.-S. Huang, W.-C. Chang, W.-H. Li, and Z.-L. Lin, “Aggregation of utility-based individual preferences for group decision-making,” European Journal of Operational Research, vol. 229, no. 2, pp. 462–469, 2013.
  3. T. L. Saaty, The Analytic Hierarchy Process, John Wiley & Sons, Ltd., 1980.
  4. S.-P. Wan and D.-F. Li, “Fuzzy LINMAP approach to heterogeneous MADM considering comparisons of alternatives with hesitation degrees,” Omega, vol. 41, no. 6, pp. 925–940, 2013.
  5. J.-J. Peng, J.-Q. Wang, J. Wang, L.-J. Yang, and X.-H. Chen, “An extension of ELECTRE to multi-criteria decision-making problems with multi-hesitant fuzzy sets,” Information Sciences, vol. 307, pp. 113–126, 2015.
  6. A. Punkka and A. Salo, “Preference programming with incomplete ordinal information,” European Journal of Operational Research, vol. 231, no. 1, pp. 141–150, 2013.
  7. M.-L. Tseng, “Green supply chain management with linguistic preferences and incomplete information,” Applied Soft Computing, vol. 11, no. 8, pp. 4894–4903, 2011.
  8. J. Zhang, D. Wu, and D. L. Olson, “The method of grey related analysis to multiple attribute decision making problems with interval numbers,” Mathematical and Computer Modelling, vol. 42, no. 9-10, pp. 991–998, 2005.
  9. M. Dong, S. Li, and H. Zhang, “Approaches to group decision making with incomplete information based on power geometric operators and triangular fuzzy AHP,” Expert Systems with Applications, vol. 42, no. 21, pp. 7846–7857, 2015.
  10. J. B. Yang and D. L. Xu, “Nonlinear information aggregation via evidential reasoning in multiattribute decision analysis under uncertainty,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 32, no. 3, pp. 376–393, 2002.
  11. H. B. Yan, V. N. Huynh, Y. Nakamori, and T. Murai, “On prioritized weighted aggregation in multi-criteria decision making,” Expert Systems with Applications, vol. 38, no. 1, pp. 812–823, 2011.
  12. R. R. Yager, “Families of OWA operators,” Fuzzy Sets and Systems, vol. 59, no. 2, pp. 125–148, 1993.
  13. V. Torra and Y. Narukawa, “On a comparison between Mahalanobis distance and Choquet integral: the Choquet-Mahalanobis operator,” Information Sciences, vol. 190, pp. 56–63, 2012.
  14. Z. S. Xu, “An overview of methods for determining OWA weights,” International Journal of Intelligent Systems, vol. 20, no. 8, pp. 843–865, 2005.
  15. S. M. Arabzad, M. Ghorbani, J. Razmi, and H. Shirouyehzad, “Employing fuzzy TOPSIS and SWOT for supplier selection and order allocation problem,” The International Journal of Advanced Manufacturing Technology, vol. 76, no. 5–8, pp. 803–818, 2014.
  16. E. Siskos, D. Askounis, and J. Psarras, “Multicriteria decision support for global e-government evaluation,” Omega, vol. 46, pp. 51–63, 2014.
  17. H. Doukas, “Modelling of linguistic variables in multicriteria energy policy support,” European Journal of Operational Research, vol. 227, no. 2, pp. 227–238, 2013.
  18. V. V. Podinovski, “Decision making under uncertainty with unknown utility function and rank-ordered probabilities,” European Journal of Operational Research, vol. 239, no. 2, pp. 537–541, 2014.
  19. A. Jiménez, A. Mateos, and P. Sabio, “Dominance intensity measure within fuzzy weight oriented MAUT: an application,” Omega, vol. 41, no. 2, pp. 397–405, 2013.
  20. T. Entani and M. Inuiguchi, “Pairwise comparison based interval analysis for group decision aiding with multiple criteria,” Fuzzy Sets and Systems, vol. 274, pp. 79–96, 2015.
  21. X. Zhao, R. Lin, and G. Wei, “Hesitant triangular fuzzy information aggregation based on Einstein operations and their application to multiple attribute decision making,” Expert Systems with Applications, vol. 41, no. 4, pp. 1086–1094, 2014.
  22. L.-W. Lee and S.-M. Chen, “Fuzzy decision making based on likelihood-based comparison relations of hesitant fuzzy linguistic term sets and hesitant fuzzy linguistic operators,” Information Sciences, vol. 294, pp. 513–529, 2015.
  23. X. Yang, L. Yan, and L. Zeng, “How to handle uncertainties in AHP: the Cloud Delphi hierarchical analysis,” Information Sciences, vol. 222, pp. 384–404, 2013.
  24. V. Torra and Y. Narukawa, Modeling Decisions: Information Fusion and Aggregation Operators, Cognitive Technologies, Springer, 2007.
  25. V. Torra, Y. Narukawa, and M. Sugeno, Non-Additive Measures: Theory and Applications, Springer, 2014.
  26. V. Torra, “Some properties by Choquet integral based probability functions,” Acta et Commentationes Universitatis Tartuensis de Mathematica, vol. 19, no. 1, pp. 35–47, 2015.
  27. J.-L. Marichal, “An axiomatic approach of the discrete Choquet integral as a tool to aggregate interacting criteria,” IEEE Transactions on Fuzzy Systems, vol. 8, no. 6, pp. 800–807, 2000.
  28. P. C. Mahalanobis, “On the generalised distance in statistics,” in Proceedings of the National Institute of Sciences of India, vol. 2, no. 1, pp. 49–55, 1936.
  29. J. Rice, Mathematical Statistics and Data Analysis, Nelson Education, 2006.
  30. B. S. Ahn, “Extreme point-based multi-attribute decision analysis with incomplete information,” European Journal of Operational Research, vol. 240, no. 3, pp. 748–755, 2015.
  31. Y. H. Chun, “Multi-attribute sequential decision problem with optimizing and satisficing attributes,” European Journal of Operational Research, vol. 243, no. 1, pp. 224–232, 2015.
  32. W.-E. Yang and J.-Q. Wang, “Multi-criteria semantic dominance: a linguistic decision aiding technique based on incomplete preference information,” European Journal of Operational Research, vol. 231, no. 1, pp. 171–181, 2013.
  33. J. L. Deng, “Introduction to grey system theory,” The Journal of Grey System, vol. 1, no. 1, pp. 1–24, 1989.
  34. S. Liu and Y. Lin, Grey Information: Theory and Practical Applications, Springer-Verlag, London, UK, 2006.
  35. S. Chen, Z. Li, and Q. Xu, “Grey target theory based equipment condition monitoring and wear mode recognition,” Wear, vol. 260, no. 4-5, pp. 438–449, 2006.
  36. D. Luo and X. Wang, “The multi-attribute grey target decision method for attribute value within three-parameter interval grey number,” Applied Mathematical Modelling, vol. 36, no. 5, pp. 1957–1963, 2012.
  37. J. Zhu and K. W. Hipel, “Multiple stages grey target decision making method with incomplete weight based on multi-granularity linguistic label,” Information Sciences, vol. 212, pp. 15–32, 2012.
  38. W. Mao, D. Luo, and H. Sun, “A multi-scale extended grey target decision method that considers the value distribution information of grey numbers,” Grey Systems: Theory and Application, vol. 7, no. 1, pp. 97–110, 2017.
  39. V. F. Yu, A. A. N. P. Redi, Y. A. Hidayat, and O. J. Wibowo, “A simulated annealing heuristic for the hybrid vehicle routing problem,” Applied Soft Computing, vol. 53, pp. 119–132, 2017.
  40. G. Palubeckis, “A variable neighborhood search and simulated annealing hybrid for the profile minimization problem,” Computers & Operations Research, vol. 87, pp. 83–97, 2017.
  41. A. K. Das, S. Das, and A. Ghosh, “Ensemble feature selection using bi-objective genetic algorithm,” Knowledge-Based Systems, vol. 123, pp. 116–127, 2017.
  42. D. Biswas, S. Panja, and S. Guha, “Multi objective optimization method by PSO,” Procedia Materials Science, vol. 6, pp. 1815–1822, 2014.
  43. N. Chouikhi, B. Ammar, N. Rokbani, and A. M. Alimi, “PSO-based analysis of Echo State Network parameters for time series forecasting,” Applied Soft Computing, vol. 55, pp. 211–225, 2017.
  44. R. V. Rao and V. J. Savsani, “Advanced optimization techniques,” in Mechanical Design Optimization Using Advanced Optimization Techniques, pp. 195–229, Springer, London, UK, 2012.