Abstract

The hesitant fuzzy linguistic set (HFLS) is a powerful tool for expressing fuzziness and uncertainty in multiattribute group decision-making (MAGDM). This paper aims to propose novel aggregation operators to fuse hesitant fuzzy linguistic information. First, we briefly recall the notion of HFLS and propose new operations for hesitant fuzzy linguistic elements (HFLEs). Second, considering that the Muirhead mean (MM) is a useful aggregation technique that can capture the interrelationship among all aggregated arguments, we extend it to the hesitant fuzzy linguistic environment and propose new hesitant fuzzy linguistic aggregation operators, namely, the hesitant fuzzy linguistic Muirhead mean (HFLMM) operator, the hesitant fuzzy linguistic dual Muirhead mean (HFLDMM) operator, the hesitant fuzzy linguistic weighted Muirhead mean (HFLWMM) operator, and the hesitant fuzzy linguistic weighted dual Muirhead mean (HFLWDMM) operator. These operators can reflect the correlations among all HFLEs. Several desirable properties and special cases of the proposed operators are also studied. Furthermore, we propose a novel approach to MAGDM in a hesitant fuzzy linguistic context based on the proposed operators. Finally, we conduct a numerical experiment to demonstrate the validity of our method. Additionally, we compare our method with others to illustrate its merits and advantages.

1. Introduction

MAGDM is the activity of selecting the optimal alternative under a set of attributes assessed by a group of decision-makers. Owing to the increased complexity of decision-making, one of the difficulties in practical MAGDM problems is representing attribute values in fuzzy and vague decision-making environments. In 1965, Zadeh [1] originally proposed an effective tool, called the fuzzy set (FS), for depicting and expressing impreciseness and uncertainty. Since its introduction, the FS has received substantial attention and has been studied by thousands of scientists worldwide in theoretical and practical aspects [2]. Thereafter, several extensions of the FS have been proposed, such as the interval-valued fuzzy set [3], intuitionistic fuzzy set (IFS) [4], interval-valued intuitionistic fuzzy set [5], type-2 fuzzy set [6], and neutrosophic set [7]. In the past decades, these fuzzy sets have been successfully applied to decision-making [8–26]. However, these tools cannot cope with circumstances in which decision-makers hesitate among a few different values when determining a membership degree. To address such cases, Torra [27] proposed the concept of the hesitant fuzzy set (HFS), which permits the membership degree of an element to a set to be represented by a set of possible values between 0 and 1.

Additionally, owing to the high complexity of actual decision-making problems, decision-makers often cannot make quantitative judgments with limited a priori knowledge and insufficient time. Thus, qualitative methods are used to express the decision-makers' preference information. The linguistic term set (LTS) allows the convenient assessment of linguistic variables with linguistic terms rather than numerical values. Motivated by the HFS, Rodríguez et al. [28] proposed the hesitant fuzzy linguistic term set (HFLTS), in which several linguistic terms are used to evaluate a linguistic variable. However, the HFLTS cannot reflect the possible membership degrees of a linguistic term to a given concept [29]. In a related extension, the intuitionistic linguistic set [30] uses a linguistic variable together with an intuitionistic fuzzy number to describe the fuzzy attributes of an alternative. To overcome the drawback of HFLTSs, Lin et al. [31] introduced HFLSs, in which HFSs are utilized to express the hesitancy of decision-makers in selecting the membership degrees for a linguistic term. The basic elements of HFLSs are called HFLEs. For example, a possible HFLE can be denoted as $\langle s_\theta, h\rangle$, where $s_\theta$ is a linguistic term and $h$ is a collection of membership degrees that describe the fuzziness, uncertainty, and hesitancy of decision-makers when providing the linguistic term. Evidently, $h$ is a hesitant fuzzy element (HFE). An HFLS can be employed to evaluate an object from two aspects, namely, a linguistic term and an HFE. The former can evaluate the object as "medium," "poor," or "too poor," and the latter can express the hesitancy of decision-makers in giving the linguistic term.

In the MAGDM process, one of the most significant steps is aggregating the decision-makers' preference information. In the past years, aggregation operators have gained increasing research attention. One of the most classical and popular aggregation operators is the ordered weighted averaging (OWA) operator, which was introduced by Yager [32] for crisp numbers. The OWA operator was extended to IFSs [33], HFSs [34], and dual HFSs [35]. Moreover, researchers have proposed extensions of the OWA operator, such as the induced OWA operator [36] and the generalized OWA operator [37]. However, the OWA operator and its extensions do not consider the interrelationship among the arguments; that is, these operators assume that attributes are independent, which is inconsistent with reality. Therefore, scholars have focused on operators that can capture the interrelationship among arguments. The Bonferroni mean (BM) [38] and the Heronian mean (HM) [39] are two crucial aggregation techniques that consider the interrelationship between any two arguments. Recently, these operators have been extended to aggregate hesitant fuzzy linguistic information, and a number of hesitant fuzzy linguistic aggregation operators have been proposed [31, 40–43]. However, correlations among arguments are ubiquitous, which means that the interrelationship among all arguments should be considered; for this purpose, the BM and HM are inadequate. The MM [44] is a well-known aggregation operator that considers the interrelationship among all arguments and possesses a parameter vector that leads to a flexible aggregation process. In addition, some existing operators are special cases of the MM. Recently, the MM has been investigated in intuitionistic fuzzy [45] and 2-tuple linguistic environments [46]. To the best of our knowledge, no research has been performed on the MM under HFLSs. Hence, the MM operator should be extended to HFLSs. The present study investigates the MM under the hesitant fuzzy linguistic environment and proposes new aggregation operators for HFLEs. The contribution of this paper is that we propose new operators for aggregating hesitant fuzzy linguistic information that can capture the interrelationship among all HFLEs. Furthermore, we apply the developed operators to MAGDM in which attribute values take the form of HFLEs.

The aims and motivations of this paper are (1) to propose some new aggregation operators to aggregate HFLEs and (2) to propose a novel approach to MAGDM problems. The rest of the paper is organized as follows. Section 2 briefly recalls basic concepts, such as HFLS and MM. Section 3 proposes several hesitant fuzzy linguistic Muirhead mean operators. Section 4 describes the developed weighted aggregation operators. Section 5 proposes a novel approach to MAGDM within the hesitant fuzzy linguistic context. Section 6 validates the proposed method by providing a numerical example. The final section summarizes the paper.

2. Preliminaries

In this section, we briefly review concepts about HFLSs and their operations. The concepts of MM and dual Muirhead mean (DMM) are also introduced.

2.1. Linguistic Term Sets and Hesitant Fuzzy Linguistic Sets

Definition 1. Let $S=\{s_i\mid i=0,1,\ldots,2t\}$ be an LTS with odd cardinality, where $s_i$ represents a possible value for a linguistic variable, and $S$ should satisfy the following [47]: (1) the set is ordered: $s_i\le s_j$ if $i\le j$; (2) the negation operator is defined such that $\mathrm{neg}(s_i)=s_{2t-i}$; (3) the max operator is $\max(s_i,s_j)=s_j$ if $s_i\le s_j$; and (4) the min operator is $\min(s_i,s_j)=s_i$ if $s_i\le s_j$. For example, when $t=3$, a possible LTS $S$ can be defined as $S=\{s_0=\text{too poor},\ s_1=\text{very poor},\ s_2=\text{poor},\ s_3=\text{medium},\ s_4=\text{good},\ s_5=\text{very good},\ s_6=\text{too good}\}$.
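To make the subscript arithmetic concrete, the following is a minimal Python sketch of a seven-term LTS and its negation, max, and min operators; the term labels follow the example above, and the helper names are our own illustrative choices.

```python
# A minimal sketch of a seven-term LTS (t = 3) and its basic operators.
# The labels follow the example above; the helper names are ours.
T = 3
LABELS = ["too poor", "very poor", "poor", "medium",
          "good", "very good", "too good"]

def neg(i: int) -> int:
    """Negation operator: neg(s_i) = s_{2t-i}."""
    return 2 * T - i

def s_max(i: int, j: int) -> int:
    """max(s_i, s_j) = s_j if s_i <= s_j."""
    return max(i, j)

def s_min(i: int, j: int) -> int:
    """min(s_i, s_j) = s_i if s_i <= s_j."""
    return min(i, j)

print(LABELS[neg(2)])       # negation of "poor" -> "good"
print(LABELS[s_max(1, 4)])  # "good"
```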

Lin et al. [31] then introduced HFLSs based on HFSs and LTSs.

Definition 2 [31]. Let $X$ be an ordinary fixed set; then a hesitant fuzzy linguistic set (HFLS) on $X$ can be defined as $A=\{\langle x, s_{\theta(x)}, h_A(x)\rangle\mid x\in X\}$, where $s_{\theta(x)}\in S$ is the linguistic term, and $h_A(x)$ is the HFE that denotes the possible membership degrees of the element $x$ to $A$. For convenience, the pair $hl=\langle s_\theta,h\rangle$ is called an HFLE by Lin et al. [31].

Example 1. Let $X=\{x_1,x_2,x_3\}$ be an ordinary fixed set. A possible HFLS defined on $X$ can be $A=\{\langle x_1,s_{\theta_1},\{0.3\}\rangle,\ \langle x_2,s_{\theta_2},\{0.1,0.6\}\rangle,\ \langle x_3,s_{\theta_3},\{0.2,0.4,0.5\}\rangle\}$ for some linguistic terms $s_{\theta_1},s_{\theta_2},s_{\theta_3}\in S$. If we divide $A$ into three subsets that each contain only one object, then the three HFLEs are $\langle s_{\theta_1},\{0.3\}\rangle$, $\langle s_{\theta_2},\{0.1,0.6\}\rangle$, and $\langle s_{\theta_3},\{0.2,0.4,0.5\}\rangle$. In $\langle s_{\theta_1},\{0.3\}\rangle$, 0.3 denotes the membership degree to which $x_1$ belongs to $s_{\theta_1}$. In $\langle s_{\theta_2},\{0.1,0.6\}\rangle$, 0.1 and 0.6 denote the possible membership degrees to which $x_2$ belongs to $s_{\theta_2}$. In $\langle s_{\theta_3},\{0.2,0.4,0.5\}\rangle$, 0.2, 0.4, and 0.5 represent the possible membership degrees to which $x_3$ belongs to $s_{\theta_3}$. Notably, $\langle s_{\theta_1},\{0.3\}\rangle$ is a special case of an HFLE in which only one membership degree is assigned in the corresponding HFE.

Evidently, an HFLE is a combination of a linguistic term with an HFE, which takes advantage of both. Compared with a linguistic term alone, an HFLE contains an HFE that permits several possible membership degrees and denotes the degrees to which an alternative belongs to the corresponding linguistic term. Therefore, an HFLE can express the fuzziness, uncertainty, and hesitancy of decision-makers more accurately and appropriately than crisp linguistic variables. Compared with an HFE alone, an HFLE has a linguistic term that evaluates an object as "poor," "medium," or "good." In other words, linguistic terms and HFEs can only evaluate objects from one aspect, whereas HFLSs can evaluate objects from two aspects, namely, qualitative (linguistic terms) and quantitative evaluations (HFEs). Therefore, HFLSs are more powerful than HFSs and LTSs.

HFLEs can be used in real decision-making problems. For example, suppose that four decision-makers agree that the linguistic term $s_2$ ("poor") is appropriate for evaluating the functionality and technology of an ERP system. The four decision-makers are then required to provide their membership degrees for the value "poor" ($s_2$). If the first decision-maker provides 0.1, the second decision-maker provides 0.3, the third decision-maker provides 0.6, and the fourth decision-maker provides 0.8, then the combined evaluation can be denoted by the HFLE $\langle s_2,\{0.1,0.3,0.6,0.8\}\rangle$. Based on this analysis, the HFLS is a powerful and effective decision-making tool.
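As a small illustration of how such a combined evaluation can be stored in code, the following sketch represents an HFLE as a linguistic subscript plus a sorted tuple of membership degrees; the class name and fields are our own choices and not part of the original formulation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class HFLE:
    """Hesitant fuzzy linguistic element <s_theta, h>: a linguistic
    subscript together with a sorted tuple of possible memberships."""
    theta: float
    h: Tuple[float, ...]

    def __post_init__(self):
        # Keep the HFE sorted so that the "t-th smallest value" is h[t].
        object.__setattr__(self, "h", tuple(sorted(self.h)))

# The ERP evaluation from the text: linguistic term s_2 ("poor") with the
# four membership degrees supplied by the four decision-makers.
erp_eval = HFLE(theta=2, h=(0.1, 0.3, 0.6, 0.8))
print(erp_eval)   # HFLE(theta=2, h=(0.1, 0.3, 0.6, 0.8))
```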

Lin et al. [31] introduced a law to compare any two HFLEs.

Definition 3 [31]. For an HFLE $hl=\langle s_\theta,h\rangle$, the score function of $hl$ is $E(hl)=\theta\times\frac{1}{\#h}\sum_{\gamma\in h}\gamma$, where $\#h$ is the number of values in $h$; for convenience, $\#h$ is also called the length of $h$. For any two HFLEs $hl_1$ and $hl_2$, if $E(hl_1)>E(hl_2)$, then $hl_1>hl_2$; if $E(hl_1)=E(hl_2)$, then $hl_1=hl_2$.
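The comparison law can be illustrated with a short sketch, assuming the score of $\langle s_\theta,h\rangle$ is the subscript $\theta$ multiplied by the mean of $h$, as stated above; the function name is ours.

```python
from statistics import mean
from typing import Tuple

def score(theta: float, h: Tuple[float, ...]) -> float:
    """Score of an HFLE <s_theta, h>: theta times the mean membership."""
    return theta * mean(h)

a = (2, (0.1, 0.6))        # <s_2, {0.1, 0.6}>
b = (3, (0.2, 0.4, 0.5))   # <s_3, {0.2, 0.4, 0.5}>
print(score(*a), score(*b))                      # ~0.70 and ~1.10
print("b > a" if score(*b) > score(*a) else "a >= b")
```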

Additionally, Lin et al. [31] introduced several operations for HFLEs.

Definition 4 [31]. Let $hl=\langle s_\theta,h\rangle$, $hl_1=\langle s_{\theta_1},h_1\rangle$, and $hl_2=\langle s_{\theta_2},h_2\rangle$ be any three HFLEs, and let $\lambda$ be a positive crisp number; then (1) $hl_1\oplus hl_2=\langle s_{\theta_1+\theta_2},\ \bigcup_{\gamma_1\in h_1,\gamma_2\in h_2}\{\gamma_1+\gamma_2-\gamma_1\gamma_2\}\rangle$; (2) $hl_1\otimes hl_2=\langle s_{\theta_1\theta_2},\ \bigcup_{\gamma_1\in h_1,\gamma_2\in h_2}\{\gamma_1\gamma_2\}\rangle$; (3) $\lambda hl=\langle s_{\lambda\theta},\ \bigcup_{\gamma\in h}\{1-(1-\gamma)^\lambda\}\rangle$; (4) $hl^\lambda=\langle s_{\theta^\lambda},\ \bigcup_{\gamma\in h}\{\gamma^\lambda\}\rangle$.

However, these operations for HFLEs are complicated to use. For example, when two HFLEs are added according to Definition 4, every pair of membership values must be combined, so the number of values in the resulting HFE equals the product of the lengths of the two original HFEs. When aggregating a set of HFLEs, the aggregated values therefore become very complicated. Hence, we should simplify the operations for HFLEs. Motivated by the simplified operations for HFEs introduced by Liao et al. [48], we introduce new operations for HFLEs.

Definition 5. Let $hl=\langle s_\theta,h\rangle$, $hl_1=\langle s_{\theta_1},h_1\rangle$, and $hl_2=\langle s_{\theta_2},h_2\rangle$ be any three HFLEs satisfying $\#h=\#h_1=\#h_2=L$, and let $\lambda$ be a positive crisp number; then (1) $hl_1\oplus hl_2=\langle s_{\theta_1+\theta_2},\ \bigcup_{t=1}^{L}\{\gamma_1^{\sigma(t)}+\gamma_2^{\sigma(t)}-\gamma_1^{\sigma(t)}\gamma_2^{\sigma(t)}\}\rangle$; (2) $hl_1\otimes hl_2=\langle s_{\theta_1\theta_2},\ \bigcup_{t=1}^{L}\{\gamma_1^{\sigma(t)}\gamma_2^{\sigma(t)}\}\rangle$; (3) $\lambda hl=\langle s_{\lambda\theta},\ \bigcup_{t=1}^{L}\{1-(1-\gamma^{\sigma(t)})^\lambda\}\rangle$; (4) $hl^\lambda=\langle s_{\theta^\lambda},\ \bigcup_{t=1}^{L}\{(\gamma^{\sigma(t)})^\lambda\}\rangle$, where $\gamma_1^{\sigma(t)}$, $\gamma_2^{\sigma(t)}$, and $\gamma^{\sigma(t)}$ represent the $t$th smallest values of $h_1$, $h_2$, and $h$, respectively.

Remark 1. In Definition 5, we assume that all HFEs have the same number of values, which cannot always be satisfied. To solve this problem, Xu and Xia [49] introduced a transformation regulation for HFEs by assuming that all decision-makers are pessimistic (or optimistic). The transformation regulation can be described as follows. For two HFEs $h_1$ and $h_2$, let $l_1=\#h_1$ and $l_2=\#h_2$. If $l_1<l_2$, then $h_1$ should be extended by adding its minimum (or maximum) value until it has the same number of values as $h_2$. If $l_1>l_2$, then $h_2$ should be extended by adding its minimum (or maximum) value until it has the same number of values as $h_1$.
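The following sketch combines the simplified operations of Definition 5 with the padding rule of Remark 1; it assumes the element-wise pairing of sorted membership values described above, and all identifiers are illustrative.

```python
# A runnable sketch of the simplified HFLE operations (Definition 5) with
# the padding rule of Remark 1.  An HFLE is a (subscript, memberships) pair.
from typing import Tuple

HFLE = Tuple[float, Tuple[float, ...]]

def pad(h1, h2, optimistic=True):
    """Extend the shorter HFE with its max (optimistic) or min (pessimistic) value."""
    h1, h2 = list(h1), list(h2)
    filler = max if optimistic else min
    while len(h1) < len(h2):
        h1.append(filler(h1))
    while len(h2) < len(h1):
        h2.append(filler(h2))
    return tuple(sorted(h1)), tuple(sorted(h2))

def hfle_add(a: HFLE, b: HFLE) -> HFLE:
    (t1, h1), (t2, h2) = a, b
    h1, h2 = pad(h1, h2)
    return (t1 + t2, tuple(x + y - x * y for x, y in zip(h1, h2)))

def hfle_mul(a: HFLE, b: HFLE) -> HFLE:
    (t1, h1), (t2, h2) = a, b
    h1, h2 = pad(h1, h2)
    return (t1 * t2, tuple(x * y for x, y in zip(h1, h2)))

def hfle_scale(lam: float, a: HFLE) -> HFLE:
    t, h = a
    return (lam * t, tuple(1 - (1 - x) ** lam for x in h))

def hfle_power(a: HFLE, lam: float) -> HFLE:
    t, h = a
    return (t ** lam, tuple(x ** lam for x in h))

print(hfle_add((2, (0.1, 0.6)), (3, (0.2, 0.4, 0.5))))  # (5, (0.28, 0.76, 0.8))
```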

2.2. Muirhead Mean

MM is an aggregation operator for crisp numbers introduced by Muirhead [44]. This operator can capture the interrelationship among all aggregated arguments.

Definition 6 [44]. Let $x_j$ ($j=1,2,\ldots,n$) be a collection of crisp numbers and $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the MM is defined as
$$\mathrm{MM}^P(x_1,x_2,\ldots,x_n)=\left(\frac{1}{n!}\sum_{\vartheta\in S_n}\prod_{j=1}^{n}x_{\vartheta(j)}^{p_j}\right)^{1/\sum_{j=1}^{n}p_j},$$
where $\vartheta(j)$ ($j=1,2,\ldots,n$) is any permutation of $(1,2,\ldots,n)$, and $S_n$ is the collection of all permutations of $(1,2,\ldots,n)$.
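A short sketch of the MM for crisp numbers may help; it directly enumerates all permutations, which is only practical for small $n$.

```python
from itertools import permutations
from math import factorial, prod

def muirhead_mean(xs, P):
    """MM^P(x_1,...,x_n) = ((1/n!) * sum over permutations of
    prod_j x_{perm(j)}^{p_j}) ** (1 / sum(P))."""
    n = len(xs)
    total = sum(prod(x ** p for x, p in zip(perm, P))
                for perm in permutations(xs))
    return (total / factorial(n)) ** (1 / sum(P))

# With P = (1, 0, ..., 0) the MM collapses to the arithmetic mean,
# and with P = (1/n, ..., 1/n) to the geometric mean.
print(muirhead_mean([2, 4, 8], (1, 0, 0)))        # ~4.6667
print(muirhead_mean([2, 4, 8], (1/3, 1/3, 1/3)))  # 4.0
```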

Furthermore, Liu and Li [45] proposed the DMM operator.

Definition 7 [45]. Let $x_j$ ($j=1,2,\ldots,n$) be a collection of crisp numbers and $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the DMM operator is defined as
$$\mathrm{DMM}^P(x_1,x_2,\ldots,x_n)=\frac{1}{\sum_{j=1}^{n}p_j}\left(\prod_{\vartheta\in S_n}\sum_{j=1}^{n}p_j x_{\vartheta(j)}\right)^{1/n!},$$
where $\vartheta(j)$ ($j=1,2,\ldots,n$) is any permutation of $(1,2,\ldots,n)$, and $S_n$ is the collection of all permutations of $(1,2,\ldots,n)$.
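A companion sketch of the DMM follows; as with the MM, the permutation enumeration is exponential and meant only for illustration.

```python
from itertools import permutations
from math import factorial

def dual_muirhead_mean(xs, P):
    """DMM^P(x) = (1/sum(P)) * (prod over permutations of
    sum_j p_j * x_{perm(j)}) ** (1/n!)."""
    n = len(xs)
    product = 1.0
    for perm in permutations(xs):
        product *= sum(p * x for x, p in zip(perm, P))
    return (product ** (1 / factorial(n))) / sum(P)

print(dual_muirhead_mean([2, 4, 8], (1, 0, 0)))  # geometric mean = 4.0
```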

3. Hesitant Fuzzy Linguistic Muirhead Mean Operators

In this section, we extend MM and DMM to the hesitant fuzzy linguistic environment and develop new aggregation operators for aggregating HFLEs.

3.1. Hesitant Fuzzy Linguistic Muirhead Mean Operator

Definition 8. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs, where $\#h_j=L$ holds for all $j$, and let $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the hesitant fuzzy linguistic Muirhead mean (HFLMM) can be defined as
$$\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)=\left(\frac{1}{n!}\bigoplus_{\vartheta\in S_n}\bigotimes_{j=1}^{n}hl_{\vartheta(j)}^{p_j}\right)^{1/\sum_{j=1}^{n}p_j},$$
where $\vartheta(j)$ ($j=1,2,\ldots,n$) is any permutation of $(1,2,\ldots,n)$, and $S_n$ is the collection of all permutations of $(1,2,\ldots,n)$.

According to the operations for HFLEs, the following theorem can be obtained.

Theorem 1. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$ and $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the aggregated value of the HFLMM is also an HFLE and
$$\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)=\left\langle s_{\left(\frac{1}{n!}\sum_{\vartheta\in S_n}\prod_{j=1}^{n}\theta_{\vartheta(j)}^{p_j}\right)^{1/\sum_{j=1}^{n}p_j}},\ \bigcup_{t=1}^{L}\left\{\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{j=1}^{n}\left(\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{p_j}\right)^{1/n!}\right)^{1/\sum_{j=1}^{n}p_j}\right\}\right\rangle,$$
where $\gamma_{\vartheta(j)}^{\sigma(t)}$ denotes the $t$th smallest value of $h_{\vartheta(j)}$.

Proof 1. According to Definition 5, $hl_{\vartheta(j)}^{p_j}=\langle s_{\theta_{\vartheta(j)}^{p_j}},\bigcup_{t=1}^{L}\{(\gamma_{\vartheta(j)}^{\sigma(t)})^{p_j}\}\rangle$. Thus,
$$\bigotimes_{j=1}^{n}hl_{\vartheta(j)}^{p_j}=\left\langle s_{\prod_{j=1}^{n}\theta_{\vartheta(j)}^{p_j}},\ \bigcup_{t=1}^{L}\left\{\prod_{j=1}^{n}\left(\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{p_j}\right\}\right\rangle.$$
Furthermore,
$$\frac{1}{n!}\bigoplus_{\vartheta\in S_n}\bigotimes_{j=1}^{n}hl_{\vartheta(j)}^{p_j}=\left\langle s_{\frac{1}{n!}\sum_{\vartheta\in S_n}\prod_{j=1}^{n}\theta_{\vartheta(j)}^{p_j}},\ \bigcup_{t=1}^{L}\left\{1-\prod_{\vartheta\in S_n}\left(1-\prod_{j=1}^{n}\left(\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{p_j}\right)^{1/n!}\right\}\right\rangle.$$
Raising this expression to the power $1/\sum_{j=1}^{n}p_j$ yields the formula in Theorem 1.
Considering that $0\le\gamma_{\vartheta(j)}^{\sigma(t)}\le 1$, we have $0\le\prod_{j=1}^{n}(\gamma_{\vartheta(j)}^{\sigma(t)})^{p_j}\le 1$, and therefore every membership value of the aggregated result also lies in $[0,1]$. Hence, the aggregated value is an HFLE, which completes the proof.
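For illustration, the closed form of Theorem 1 can be evaluated numerically as in the following sketch, which assumes the element-wise laws of Definition 5 and HFEs already padded to a common length; the data at the end are hypothetical.

```python
# An illustrative sketch of the closed form in Theorem 1.  The linguistic
# subscript follows the crisp MM; each membership position follows the
# algebraic product / probabilistic sum pattern of Definition 5.
from itertools import permutations
from math import factorial, prod
from typing import List, Tuple

HFLE = Tuple[float, Tuple[float, ...]]  # (subscript, sorted memberships)

def hflmm(hfles: List[HFLE], P: Tuple[float, ...]) -> HFLE:
    n, L = len(hfles), len(hfles[0][1])
    # Linguistic part: crisp Muirhead mean of the subscripts.
    theta = (sum(prod(hfles[i][0] ** p for i, p in zip(perm, P))
                 for perm in permutations(range(n)))
             / factorial(n)) ** (1 / sum(P))
    # Membership part, position by position over the sorted HFEs.
    gammas = []
    for t in range(L):
        inner = prod((1 - prod(hfles[i][1][t] ** p for i, p in zip(perm, P)))
                     ** (1 / factorial(n))
                     for perm in permutations(range(n)))
        gammas.append((1 - inner) ** (1 / sum(P)))
    return theta, tuple(gammas)

# Hypothetical HFLEs whose HFEs were already padded to length 2.
data = [(2, (0.1, 0.6)), (3, (0.3, 0.3)), (4, (0.2, 0.5))]
print(hflmm(data, (1, 1, 1)))
```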

The HFLMM operator has the following properties.

Theorem 2 (monotonicity). Let $hl_j=\langle s_{\theta_j},h_j\rangle$ and $hl_j'=\langle s_{\theta_j'},h_j'\rangle$ ($j=1,2,\ldots,n$) be two collections of HFLEs; if these conditions are satisfied for all $j$: (1) $\theta_j\le\theta_j'$; (2) $\#h_j=\#h_j'=L$; (3) $\gamma_j^{\sigma(t)}\le{\gamma'}_j^{\sigma(t)}$ for $t=1,2,\ldots,L$, where $\gamma_j^{\sigma(t)}$ and ${\gamma'}_j^{\sigma(t)}$ denote the $t$th smallest values of $h_j$ and $h_j'$, then
$$\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)\le\mathrm{HFLMM}^P(hl_1',hl_2',\ldots,hl_n').$$

Proof 2. Let $\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)=\langle s_\theta,h\rangle$ and $\mathrm{HFLMM}^P(hl_1',hl_2',\ldots,hl_n')=\langle s_{\theta'},h'\rangle$. From Theorem 1, we know that $\langle s_\theta,h\rangle$ and $\langle s_{\theta'},h'\rangle$ are two HFLEs. Given that $\theta_j\le\theta_j'$ for all $j$, every product $\prod_{j=1}^{n}\theta_{\vartheta(j)}^{p_j}$ is no larger than the corresponding product $\prod_{j=1}^{n}{\theta'}_{\vartheta(j)}^{p_j}$, and therefore $\theta\le\theta'$. Given that $\gamma_j^{\sigma(t)}\le{\gamma'}_j^{\sigma(t)}$ for all $j$ and $t$, each term $\prod_{j=1}^{n}(\gamma_{\vartheta(j)}^{\sigma(t)})^{p_j}$ is no larger than $\prod_{j=1}^{n}({\gamma'}_{\vartheta(j)}^{\sigma(t)})^{p_j}$, and thus each membership value of $h$ is no larger than the corresponding value of $h'$. Consequently, $E(\langle s_\theta,h\rangle)\le E(\langle s_{\theta'},h'\rangle)$ by Definition 3, that is, $\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)\le\mathrm{HFLMM}^P(hl_1',hl_2',\ldots,hl_n')$, which completes the proof.

Theorem 3 (idempotency). If all $hl_j$ ($j=1,2,\ldots,n$) are equal, that is, $hl_j=hl=\langle s_\theta,h\rangle$ for all $j$, then $\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)=hl$.

Proof 3. According to Theorem 1, if $hl_j=hl=\langle s_\theta,h\rangle$ for all $j$, then every product $\prod_{j=1}^{n}\theta_{\vartheta(j)}^{p_j}$ equals $\theta^{\sum_{j=1}^{n}p_j}$, so the linguistic part reduces to $s_\theta$; similarly, every membership term reduces to the corresponding value $\gamma^{\sigma(t)}$ of $h$. Hence, $\mathrm{HFLMM}^P(hl,hl,\ldots,hl)=\langle s_\theta,h\rangle=hl$.

Theorem 4 (boundedness). Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs such that $\#h_j=L$ holds for all $j$. If $hl^-=\langle s_{\min_j\theta_j},\bigcup_{t=1}^{L}\{\min_j\gamma_j^{\sigma(t)}\}\rangle$ and $hl^+=\langle s_{\max_j\theta_j},\bigcup_{t=1}^{L}\{\max_j\gamma_j^{\sigma(t)}\}\rangle$, then $hl^-\le\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)\le hl^+$.

Proof 4. According to Theorems 2 and 3, we can obtain $\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)\ge\mathrm{HFLMM}^P(hl^-,hl^-,\ldots,hl^-)=hl^-$ and $\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)\le\mathrm{HFLMM}^P(hl^+,hl^+,\ldots,hl^+)=hl^+$. Hence, $hl^-\le\mathrm{HFLMM}^P(hl_1,hl_2,\ldots,hl_n)\le hl^+$.

In the following, we explore special cases of the HFLMM operator with respect to the parameter vector $P$.

Case 1. If $P=(1,0,\ldots,0)$, then the HFLMM is reduced to the hesitant fuzzy linguistic averaging (HFLA) operator.

Case 2. If $P=(\lambda,0,\ldots,0)$, then the HFLMM is reduced to the generalized hesitant fuzzy linguistic averaging (GHFLA) operator.

Case 3. If $P=(1,1,0,\ldots,0)$, then the HFLMM is reduced to the hesitant fuzzy linguistic Bonferroni mean (HFLBM) operator.

Case 4. If $P=(\underbrace{1,1,\ldots,1}_{k},\underbrace{0,0,\ldots,0}_{n-k})$, then the HFLMM is reduced to the hesitant fuzzy linguistic Maclaurin symmetric mean operator.

Case 5. If $P=(1,1,\ldots,1)$ or $P=(1/n,1/n,\ldots,1/n)$, then the HFLMM is reduced to the hesitant fuzzy linguistic geometric averaging (HFLGA) operator.

3.2. Hesitant Fuzzy Linguistic Dual Muirhead Mean

Definition 9. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$, and let $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the hesitant fuzzy linguistic dual Muirhead mean (HFLDMM) can be defined as
$$\mathrm{HFLDMM}^P(hl_1,hl_2,\ldots,hl_n)=\frac{1}{\sum_{j=1}^{n}p_j}\left(\bigotimes_{\vartheta\in S_n}\bigoplus_{j=1}^{n}p_j\,hl_{\vartheta(j)}\right)^{1/n!},$$
where $\vartheta(j)$ ($j=1,2,\ldots,n$) is any permutation of $(1,2,\ldots,n)$ and $S_n$ is the collection of all permutations of $(1,2,\ldots,n)$.

Similar to the HFLMM operator, we can obtain the following theorem according to Definition 5.

Theorem 5. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$ and $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the aggregated value of the HFLDMM is also an HFLE and
$$\mathrm{HFLDMM}^P(hl_1,hl_2,\ldots,hl_n)=\left\langle s_{\frac{1}{\sum_{j=1}^{n}p_j}\left(\prod_{\vartheta\in S_n}\sum_{j=1}^{n}p_j\theta_{\vartheta(j)}\right)^{1/n!}},\ \bigcup_{t=1}^{L}\left\{1-\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{j=1}^{n}\left(1-\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{p_j}\right)^{1/n!}\right)^{1/\sum_{j=1}^{n}p_j}\right\}\right\rangle,$$
where $\gamma_{\vartheta(j)}^{\sigma(t)}$ denotes the $t$th smallest value of $h_{\vartheta(j)}$.

Proof 5. According to Definition 5, $p_j\,hl_{\vartheta(j)}=\langle s_{p_j\theta_{\vartheta(j)}},\bigcup_{t=1}^{L}\{1-(1-\gamma_{\vartheta(j)}^{\sigma(t)})^{p_j}\}\rangle$, and therefore
$$\bigoplus_{j=1}^{n}p_j\,hl_{\vartheta(j)}=\left\langle s_{\sum_{j=1}^{n}p_j\theta_{\vartheta(j)}},\ \bigcup_{t=1}^{L}\left\{1-\prod_{j=1}^{n}\left(1-\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{p_j}\right\}\right\rangle.$$
Furthermore,
$$\left(\bigotimes_{\vartheta\in S_n}\bigoplus_{j=1}^{n}p_j\,hl_{\vartheta(j)}\right)^{1/n!}=\left\langle s_{\left(\prod_{\vartheta\in S_n}\sum_{j=1}^{n}p_j\theta_{\vartheta(j)}\right)^{1/n!}},\ \bigcup_{t=1}^{L}\left\{\prod_{\vartheta\in S_n}\left(1-\prod_{j=1}^{n}\left(1-\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{p_j}\right)^{1/n!}\right\}\right\rangle.$$
Multiplying this expression by the scalar $1/\sum_{j=1}^{n}p_j$ yields the formula in Theorem 5.
Given that $0\le\gamma_{\vartheta(j)}^{\sigma(t)}\le 1$, every membership value of the aggregated result also lies in $[0,1]$; hence, the aggregated value is an HFLE, which completes the proof.
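For illustration, the closed form in Theorem 5 can be evaluated numerically with the following sketch, under the same assumptions as for the HFLMM (element-wise laws of Definition 5, equal-length HFEs); the data are hypothetical.

```python
# An illustrative sketch of the HFLDMM closed form in Theorem 5.
from itertools import permutations
from math import factorial, prod
from typing import List, Tuple

HFLE = Tuple[float, Tuple[float, ...]]  # (subscript, sorted memberships)

def hfldmm(hfles: List[HFLE], P: Tuple[float, ...]) -> HFLE:
    n, L = len(hfles), len(hfles[0][1])
    # Linguistic part: crisp dual Muirhead mean of the subscripts.
    theta = (prod(sum(p * hfles[i][0] for i, p in zip(perm, P))
                  for perm in permutations(range(n)))
             ** (1 / factorial(n))) / sum(P)
    # Membership part, position by position over the sorted HFEs.
    gammas = []
    for t in range(L):
        inner = prod((1 - prod((1 - hfles[i][1][t]) ** p for i, p in zip(perm, P)))
                     ** (1 / factorial(n))
                     for perm in permutations(range(n)))
        gammas.append(1 - (1 - inner) ** (1 / sum(P)))
    return theta, tuple(gammas)

data = [(2, (0.1, 0.6)), (3, (0.3, 0.3)), (4, (0.2, 0.5))]
print(hfldmm(data, (1, 1, 1)))
```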

Similar to the HFLMM operator, the HFLDMM operator has the following properties, which can be easily proven.

Theorem 6. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs. (1) Monotonicity: let $hl_j'=\langle s_{\theta_j'},h_j'\rangle$ ($j=1,2,\ldots,n$) be another collection of HFLEs; if, for all $j$, $\theta_j\le\theta_j'$, $\#h_j=\#h_j'$, and $\gamma_j^{\sigma(t)}\le{\gamma'}_j^{\sigma(t)}$ for every $t$, where $\gamma_j^{\sigma(t)}$ and ${\gamma'}_j^{\sigma(t)}$ denote the $t$th smallest values of $h_j$ and $h_j'$, then $\mathrm{HFLDMM}^P(hl_1,\ldots,hl_n)\le\mathrm{HFLDMM}^P(hl_1',\ldots,hl_n')$. (2) Idempotency: if all $hl_j$ are equal, that is, $hl_j=hl$ for all $j$, then $\mathrm{HFLDMM}^P(hl_1,\ldots,hl_n)=hl$. (3) Boundedness: if $hl^-$ and $hl^+$ are defined as in Theorem 4, then $hl^-\le\mathrm{HFLDMM}^P(hl_1,\ldots,hl_n)\le hl^+$.

In the following, we explore special cases of the HFLDMM operator with respect to the parameter vector $P$.

Case 1. If $P=(1,0,\ldots,0)$, then the HFLDMM is reduced to the HFLGA operator.

Case 2. If $P=(\lambda,0,\ldots,0)$, then the HFLDMM is reduced to the generalized hesitant fuzzy linguistic geometric averaging (GHFLGA) operator.

Case 3. If $P=(1,1,0,\ldots,0)$, then the HFLDMM is reduced to the hesitant fuzzy linguistic geometric Bonferroni mean (HFLGBM) operator.

Case 4. If $P=(\underbrace{1,1,\ldots,1}_{k},\underbrace{0,0,\ldots,0}_{n-k})$, then the HFLDMM is reduced to the hesitant fuzzy linguistic geometric Maclaurin symmetric mean (HFLGMSM) operator.

Case 5. If $P=(1,1,\ldots,1)$ or $P=(1/n,1/n,\ldots,1/n)$, then the HFLDMM is reduced to the HFLA operator.

4. Hesitant Fuzzy Linguistic Weighted Muirhead Mean Operators

Evidently, the HFLMM and HFLDMM operators do not consider the weights of the aggregated HFLEs. Therefore, we develop hesitant fuzzy linguistic weighted Muirhead mean operators that take the weights of the HFLEs into account.

Definition 10. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$, and let $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters. Let the weight vector be $w=(w_1,w_2,\ldots,w_n)^T$, satisfying $w_j\in[0,1]$ and $\sum_{j=1}^{n}w_j=1$. If
$$\mathrm{HFLWMM}^P(hl_1,hl_2,\ldots,hl_n)=\left(\frac{1}{n!}\bigoplus_{\vartheta\in S_n}\bigotimes_{j=1}^{n}\left(n w_{\vartheta(j)}\,hl_{\vartheta(j)}\right)^{p_j}\right)^{1/\sum_{j=1}^{n}p_j},$$
then $\mathrm{HFLWMM}^P$ is called the hesitant fuzzy linguistic weighted Muirhead mean (HFLWMM) operator, where $\vartheta(j)$ ($j=1,2,\ldots,n$) is any permutation of $(1,2,\ldots,n)$ and $S_n$ is the collection of all permutations of $(1,2,\ldots,n)$.

According to Definition 5, we can obtain the following theorem.

Theorem 7. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$ and $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the aggregated value of the HFLWMM is also an HFLE and
$$\mathrm{HFLWMM}^P(hl_1,\ldots,hl_n)=\left\langle s_{\left(\frac{1}{n!}\sum_{\vartheta\in S_n}\prod_{j=1}^{n}\left(n w_{\vartheta(j)}\theta_{\vartheta(j)}\right)^{p_j}\right)^{1/\sum_{j=1}^{n}p_j}},\ \bigcup_{t=1}^{L}\left\{\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{j=1}^{n}\left(1-\left(1-\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{n w_{\vartheta(j)}}\right)^{p_j}\right)^{1/n!}\right)^{1/\sum_{j=1}^{n}p_j}\right\}\right\rangle.$$

The proof of Theorem 7 is similar to that of Theorem 1 and is thus omitted to save space.
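As an illustration of Definition 10, the sketch below first replaces each argument $hl_i$ by $n w_i\,hl_i$ (using the scalar multiplication of Definition 5) and then applies the unweighted HFLMM; the `hflmm` helper repeats the earlier sketch so that the snippet runs on its own, and the weights at the end are hypothetical.

```python
from itertools import permutations
from math import factorial, prod
from typing import List, Tuple

HFLE = Tuple[float, Tuple[float, ...]]  # (subscript, sorted memberships)

def hflmm(hfles: List[HFLE], P) -> HFLE:
    """Unweighted HFLMM (same as the earlier sketch)."""
    n, L = len(hfles), len(hfles[0][1])
    theta = (sum(prod(hfles[i][0] ** p for i, p in zip(perm, P))
                 for perm in permutations(range(n)))
             / factorial(n)) ** (1 / sum(P))
    gammas = []
    for t in range(L):
        inner = prod((1 - prod(hfles[i][1][t] ** p for i, p in zip(perm, P)))
                     ** (1 / factorial(n))
                     for perm in permutations(range(n)))
        gammas.append((1 - inner) ** (1 / sum(P)))
    return theta, tuple(gammas)

def hflwmm(hfles: List[HFLE], P, w) -> HFLE:
    """Weighted version: scale each argument by n*w_i, then apply hflmm."""
    n = len(hfles)
    weighted = [(n * wi * t, tuple(1 - (1 - x) ** (n * wi) for x in h))
                for (t, h), wi in zip(hfles, w)]
    return hflmm(weighted, P)

data = [(2, (0.1, 0.6)), (3, (0.3, 0.3)), (4, (0.2, 0.5))]
print(hflwmm(data, (1, 1, 1), (0.2, 0.5, 0.3)))
```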

Definition 11. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$, and let $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters. Let the weight vector be $w=(w_1,w_2,\ldots,w_n)^T$, satisfying $w_j\in[0,1]$ and $\sum_{j=1}^{n}w_j=1$. If
$$\mathrm{HFLWDMM}^P(hl_1,hl_2,\ldots,hl_n)=\frac{1}{\sum_{j=1}^{n}p_j}\left(\bigotimes_{\vartheta\in S_n}\bigoplus_{j=1}^{n}p_j\,hl_{\vartheta(j)}^{\,n w_{\vartheta(j)}}\right)^{1/n!},$$
then we call $\mathrm{HFLWDMM}^P$ the hesitant fuzzy linguistic weighted dual Muirhead mean (HFLWDMM) operator, where $\vartheta(j)$ ($j=1,2,\ldots,n$) is any permutation of $(1,2,\ldots,n)$ and $S_n$ is the collection of all permutations of $(1,2,\ldots,n)$.

According to Definition 5, we can obtain the following theorem.

Theorem 8. Let $hl_j=\langle s_{\theta_j},h_j\rangle$ ($j=1,2,\ldots,n$) be a collection of HFLEs with $\#h_j=L$ holding for all $j$ and $P=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ be a vector of parameters; then the aggregated value of the HFLWDMM is also an HFLE and
$$\mathrm{HFLWDMM}^P(hl_1,\ldots,hl_n)=\left\langle s_{\frac{1}{\sum_{j=1}^{n}p_j}\left(\prod_{\vartheta\in S_n}\sum_{j=1}^{n}p_j\theta_{\vartheta(j)}^{\,n w_{\vartheta(j)}}\right)^{1/n!}},\ \bigcup_{t=1}^{L}\left\{1-\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{j=1}^{n}\left(1-\left(\gamma_{\vartheta(j)}^{\sigma(t)}\right)^{n w_{\vartheta(j)}}\right)^{p_j}\right)^{1/n!}\right)^{1/\sum_{j=1}^{n}p_j}\right\}\right\rangle.$$

The proof of Theorem 8 is similar to that of Theorem 1 and is thus omitted.

5. Novel Approach to MAGDM with Hesitant Fuzzy Linguistic Information

In this section, we propose a novel approach to MAGDM based on the proposed aggregation operators. A typical MAGDM problem wherein the attribute values take the form of HFLEs can be described as follows: let $A=\{A_1,A_2,\ldots,A_m\}$ be a set of alternatives and $G=\{G_1,G_2,\ldots,G_n\}$ be a set of attributes with weight vector $w=(w_1,w_2,\ldots,w_n)^T$, satisfying $w_j\in[0,1]$ and $\sum_{j=1}^{n}w_j=1$. A group of experts is organized to act as decision-makers and to assess each alternative $A_i$ with respect to each attribute $G_j$. The decision-makers are required to express their assessments anonymously by using an HFLE, denoted by $hl_{ij}=\langle s_{\theta_{ij}},h_{ij}\rangle$. Therefore, the hesitant fuzzy linguistic decision matrix $R=(hl_{ij})_{m\times n}$ can be obtained. We propose a new method for MAGDM in the following.

Step 1. Normalize the decision matrix. The original decision matrix should be normalized from two points of view. First, if the attributes can be divided into benefit and cost types, then the decision matrix should be normalized by
$$hl_{ij}=\begin{cases}\langle s_{\theta_{ij}},h_{ij}\rangle, & G_j\in B,\\ \langle \mathrm{neg}(s_{\theta_{ij}}),\ \bigcup_{\gamma\in h_{ij}}\{1-\gamma\}\rangle, & G_j\in C,\end{cases}$$
where $B$ and $C$ denote the sets of benefit and cost attributes, respectively. Second, to simplify the calculation process, we assume that all the HFEs in the HFLEs have the same number of values. Thus, we extend the shorter HFEs of the corresponding HFLEs according to the transformation regulation presented in Remark 1 until all HFEs have the same number of values.

Step 2. For each alternative $A_i$, utilize the HFLWMM operator or the HFLWDMM operator to aggregate the preference information $hl_{ij}$ ($j=1,2,\ldots,n$) into an overall value $hl_i$.

Step 3. Calculate the scores of the overall values and rank them according to Definition 3.

Step 4. Rank the alternatives according to the corresponding rank of overall values and select the best alternative.
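The whole procedure can be summarized in a short, hedged sketch: `normalize` implements the benefit/cost conversion assumed in Step 1, `score` follows Definition 3, and `aggregate` is a placeholder for the HFLWMM or HFLWDMM sketched earlier; the decision matrix and the toy aggregator at the end are purely hypothetical.

```python
from statistics import mean
from typing import Callable, List, Tuple

HFLE = Tuple[float, Tuple[float, ...]]
T = 3  # seven-term LTS, subscripts 0..2T

def normalize(hfle: HFLE, is_cost: bool) -> HFLE:
    """Cost attributes: negate the linguistic term and complement the HFE."""
    theta, h = hfle
    if not is_cost:
        return hfle
    return (2 * T - theta, tuple(sorted(1 - x for x in h)))

def score(hfle: HFLE) -> float:
    """Score of an HFLE as in Definition 3 (assumed: theta * mean(h))."""
    theta, h = hfle
    return theta * mean(h)

def rank(matrix: List[List[HFLE]], cost_flags: List[bool],
         aggregate: Callable[[List[HFLE]], HFLE]) -> List[int]:
    overall = []
    for row in matrix:                     # one row of HFLEs per alternative
        row = [normalize(v, c) for v, c in zip(row, cost_flags)]
        overall.append(score(aggregate(row)))
    # Indices of alternatives, best first.
    return sorted(range(len(matrix)), key=lambda i: overall[i], reverse=True)

def toy_aggregate(row):
    """Stand-in aggregator: averages the subscripts and keeps the first HFE."""
    return (mean(t for t, _ in row), row[0][1])

# Hypothetical 2-alternative, 2-attribute matrix; the second attribute is a cost.
M = [[(2, (0.1, 0.6)), (4, (0.3, 0.3))],
     [(5, (0.4, 0.7)), (3, (0.2, 0.5))]]
print(rank(M, [False, True], toy_aggregate))  # [1, 0]
```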

6. Numerical Example

In this section, we provide a numerical example adopted from [31] to validate the proposed approach. A company decides to implement an enterprise resource planning (ERP) system. After primary evaluation, five potential ERP systems $A_i$ ($i=1,2,\ldots,5$) are chosen as candidates. To select the best ERP system, the company invites several professional experts to aid in this decision-making. The candidates are evaluated from four aspects (attributes), namely, (1) functionality and technology $G_1$, (2) strategic fitness $G_2$, (3) vendor's ability $G_3$, and (4) vendor's reputation $G_4$. The weight vector of the attributes is denoted by $w=(w_1,w_2,w_3,w_4)^T$. Decision-makers are required to evaluate the five possible ERP systems anonymously. The hesitant fuzzy linguistic decision matrix is presented in Table 1, where each entry $hl_{ij}$ is an HFLE. In the following, we utilize the proposed approach to solve the problem.

6.1. Decision-Making Process

Step 1. Normalize the decision matrix. All the attributes are benefit attributes; thus, we only need to extend the shorter HFEs by adding certain values. Here, we assume that the decision-makers are optimistic. Therefore, we can obtain the normalized hesitant fuzzy linguistic decision matrix shown in Table 2.

Step 2. For each alternative $A_i$, the HFLWMM operator in Theorem 7 is used to aggregate all the attribute values. Therefore, we can obtain a set of overall values $hl_i$ ($i=1,2,\ldots,5$). Without loss of generality, a fixed parameter vector $P$ is assumed.

Step 3. Calculate the scores of the overall values $hl_i$ and rank them. According to Definition 3, the scores of the overall values can be derived, and the rank order of the overall values follows directly.

Step 4. Rank the alternatives according to the rank of the overall values $hl_i$; the alternative with the largest score is selected as the best ERP system.

In Step 2, if we instead utilize the HFLWDMM operator in Theorem 8 to aggregate the attribute values, then we obtain another series of overall values.

The scores of these overall values are then computed according to Definition 3.

Therefore, the corresponding ranking of the alternatives and the best alternative under the HFLWDMM operator can be obtained.

6.2. Further Discussion

To demonstrate the effectiveness of the proposed approach, we utilize other methods to solve the above example. These methods include the ones proposed by Lin et al. [31] based on the hesitant fuzzy linguistic weighted averaging (HFLWA) operator and the hesitant fuzzy linguistic weighted geometric (HFLWG) operator, by Lin et al. [40] and Wei et al. [41] based on the hesitant fuzzy linguistic hybrid average operator, by Lin et al. [31] based on the hesitant fuzzy linguistic prioritized weighted average (HFLPWA) operator and the hesitant fuzzy linguistic prioritized weighted geometric (HFLPWG) operator, and by Liu [42] based on the hesitant fuzzy linguistic weighted Bonferroni mean (HFLWBM) operator and the hesitant fuzzy linguistic weighted geometric Bonferroni mean (HFLWGBM) operator. Details are presented in Table 3.

As seen from Table 3, if we use the HFLWA and HFLWBM operators, we can obtain the same ranking result as that derived by the HFLWMM operator. If we use the HFLWHA, HFLPWA, and HFLPWG operators, we can obtain the same ranking result as that obtained by the HFLWDMM operator. Therefore, the proposed approach is effective in handling MAGDM with hesitant fuzzy linguistic information. To further demonstrate the merits and advantages of the newly developed approach, we compare our method with those in [31, 41–43]. Table 4 presents several characteristics of these operators.

Through a comparison with other approaches and aggregation operators, we draw the following conclusions. The weaknesses of the approaches based on the HFLWA and HFLWG operators are that (1) the calculation is based on the operational laws in Definition 4, which are too complicated to use, and (2) the methods cannot capture the interrelationship among arguments. The approach proposed in this paper is more general and flexible than those based on HFLWA and HFLWG. First, the new decision-making approach is based on the new operational laws in Definition 5, resulting in simpler calculations. Second, the new method considers the interrelationship among all the arguments. Third, the new approach is more flexible than those based on the HFLWA and HFLWG operators, as HFLWA is a special case of HFLWMM, and HFLWG is a special case of HFLWDMM.

The HFLPWA and HFLPWG operators can consider the entire interrelationship among the HFLEs being fused but can only be used to address problems with unknown attribute weights. Moreover, the HFLPWA and HFLPWG operators are not as flexible as the HFLWMM and HFLWDMM operators, which have the parameter vector $P$.

The HFLWBM and HFLWGBM operators can only consider the interrelationship between any two arguments. However, in real decision-making problems, all the arguments provided by decision-makers are correlated, which means that the interrelationship among all the arguments should be considered. The HFLWMM and HFLWDMM operators can capture all these correlations and have the parameter vector $P$, resulting in flexible information aggregation.

In summary, the proposed HFLWMM and HFLWDMM operators exhibit the following advantages: (1) they are based on simplified operational laws that streamline the decision-making process, (2) they consider the interrelationship among all arguments, and (3) they have the parameter vector $P$, which leads to flexible information aggregation.

The parameter vector $P$ plays an important role in the ranking results. To demonstrate this, we investigate different cases by assigning different values to $P$. Additional details can be found in Tables 5 and 6.

As shown in Tables 5 and 6, different ranking results can be obtained by assigning different values to the parameter vector $P$, which confirms that the HFLWMM and HFLWDMM operators are two flexible aggregation operators. As shown in Table 5, the more interrelationships between attribute values are considered in the HFLWMM operator, the smaller the scores of the overall values become. As seen in Table 6, the more interrelationships between attribute values are considered in the HFLWDMM operator, the higher the scores of the overall values become. Another interesting property is observed when all the components of the parameter vector $P$ are equal: in this case, the scores of the overall values and the ranking results are the same regardless of the common value.

7. Conclusions

The HFLS is a powerful and efficient tool for describing the fuzziness, uncertainty, and hesitancy of decision-makers in MAGDM. In this paper, we introduced a new approach to MAGDM with hesitant fuzzy linguistic information. First, we investigated the MM under the hesitant fuzzy linguistic environment and introduced the HFLMM, HFLDMM, HFLWMM, and HFLWDMM operators. These operators consider the interrelationship among all HFLEs. Moreover, the presence of the parameter vector $P$ leads to a flexible information aggregation process. Second, we proposed a novel approach to MAGDM by using these operators. Third, we provided a numerical example and performed a comparative analysis to illustrate the validity and advantages of the new approach. The proposed approach can be used to effectively solve MAGDM problems with hesitant fuzzy linguistic information. In the future, we will apply the proposed method to real decision problems. In addition, we will investigate more aggregation operators for fusing hesitant fuzzy linguistic information.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was partially supported by a key program of the National Natural Science Foundation of China (NSFC) with grant number 71532002 and the Fundamental Research Funds for the Central Universities with grant number 2017YJS075.