Abstract

Group decision-making (GDM) in ambiguous environments has been a research focus in decision science over the past decade. Existing minimum cost consensus models either control the total budget in a deterministic context or focus on improving the utility of decision-makers. In this study, a novel consensus model with distributionally robust chance constraints (DRO-MCCM) is explored. First, two distributionally robust chance-constrained consensus models are developed based on the varied utility preferences of decision-makers, taking into consideration the uncertainty of the unit adjustment cost. Next, conditional value-at-risk (CVaR) is constructed to approximate the chance constraint on the cost, the viewpoints of decision-makers with ambiguous preferences are modeled with utility functions and Gaussian distributions, and the model is converted into a tractable semidefinite programming problem using dual theory and the moment method. Finally, the models are applied to a supply chain management scenario involving new product pricing. Comparison and sensitivity analyses demonstrate the model’s superiority and effectiveness.

1. Introduction

Group decision-making (GDM) [1–4] refers to the process in which two or more group members participate in the analysis of a problem to be solved, each member expresses his or her own opinion, and the group finally selects an approach to the problem from the available feasible solutions. In the process of GDM, group members often go through sufficient communication and multiple rounds of consultation, with some members gradually revising their initial opinions and preferences until a consensus is finally reached [5, 6]. Therefore, GDM is often superior to individual decision-making and is an expression of collective wisdom and opinion.

Because they come from different educational backgrounds and represent various interest groups, decision-makers frequently have inconsistent preferences during the actual decision-making process. Therefore, reaching a consensus requires a moderator who supervises and leads the entire decision-making process and ultimately moves group opinions toward consensus. Moderators in GDM provide individual decision-makers with financial compensation (the so-called “consensus cost”) to prompt them to revise their initial opinions. During the consensus negotiation process, the moderator hopes to reach a consensus at the least cost, while the decision-makers hope to obtain more compensation for revising their opinions. In addition, the resources available for reaching consensus are usually limited. To this end, Dong et al. [7] initiated a minimum adjustment consensus model (MACM) to maximally preserve the decision-makers’ original preference information, and Ben-Arieh and Easton [8] put forward a minimum cost consensus model (MCCM). By introducing an aggregation function, Zhang et al. [9] created an optimization model that links MACM and MCCM. Furthermore, Cheng et al. [10] proposed minimum-cost consensus models with asymmetric adjustment costs, where the unit cost of an expert differs depending on whether the initial opinion is raised or lowered. Zhang et al. [11] reviewed consensus models with cost feedback or minimum adjustment mechanisms over the previous ten years and pointed out some areas for further study. Considering DMs’ different linguistic preference relations, Li et al. [12] and Zhang and Li [13, 14] built optimization models to assist decision-makers in making feedback adjustments during the consensus process and improve consensus efficiency. Gong et al. [15, 16] created a maximum utility optimization consensus model that maximizes the utility of decision-makers and moderator under limited resource constraints to improve the satisfaction of decision-makers in the consensus-reaching process. These modeling approaches assume that the unit adjustment cost is a fixed value and therefore belong to deterministic-environment modeling. However, in practical decision-making, influenced by experts’ knowledge backgrounds, trust levels, and interests, uncertainty pervades the whole decision-making process. According to the current state of research, there is still a lack of literature that considers both the minimum consensus cost and the decision-makers’ utility in an uncertain environment, which makes this a research direction with broad prospects.

Based on such uncertain environments, this paper builds consensus decision-making models with distributionally robust chance constraints for two scenarios, which consider random perturbations of the unit adjustment cost and express the decision-makers’ preferences through utility functions, Gaussian distributions, and other forms. In GDM, approaches to handling uncertainty usually include fuzzy interval analysis [17–21], uncertainty theory [22–24], stochastic programming [25–27], and robust optimization [28–31]. Although these methods cope with the influence of uncertainty and provide reasonable suggestions for group decision-making, researchers have found that they still have shortcomings and practical situations where they are not applicable. The fuzzy interval method has difficulty dealing with complex decision problems. The lack of historical data on random variables in stochastic programming makes it hard to specify an accurate distribution for the uncertain parameters. Robust optimization [32] considers the values of the unknown parameters in the worst case, and the resulting solutions are often too conservative [33–35]. Therefore, distributionally robust optimization has emerged. Distributionally robust optimization [36–39] allows the unknown parameters of the model to be random variables whose distribution lies in an uncertainty set of distribution functions, and the original model is transformed into a tractable linear program, second-order cone program, or semidefinite program through the method of moments. As a powerful tool for decision-making in uncertain environments, distributionally robust optimization combines the advantages of robust optimization and stochastic programming and is widely used in financial economics [40–42] and supply chain management [43–46].

The following are this paper’s primary innovations and contributions:
(1) We propose two types of consensus decision-making models with distributionally robust chance constraints that consider both the minimum consensus cost and the maximum utility of the decision-makers.
(2) The models place the random variables that affect the unit adjustment cost in a set of probability distributions and characterize the decision-makers’ preferences through uncertain forms such as utility functions and Gaussian distributions.
(3) The moment technique and duality theory are employed to convert the consensus model with distributionally robust chance constraints into a semidefinite programming model that is simple to solve, and CVaR is exploited to approximate the chance constraint on the consensus cost.

The rest of this paper is organized as follows. Section 2 constructs the model; taking into account the uncertainty of the unit adjustment cost and the preferences of the DMs, two forms of distributionally robust chance-constrained consensus models are proposed. Section 3 presents a case study with numerical experiments, sensitivity analysis, and comparative analysis. Section 4 concludes and outlines future research directions.

2. Model Construction

2.1. Basic Minimum Cost Consensus Model

Ben-Arieh and Easton [8] proposed the concept of consensus cost based on the following assumptions: there are $n$ decision-makers, $o_i$ represents the opinion of the $i$th DM, and $o'$ is the consensus opinion. Let $c_i$ represent the unit consensus cost of DM $i$. Moreover, Zhang et al. [9] introduced an optimization model that can be used to characterize the MCCM:
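For concreteness, the following is a minimal Python/cvxpy sketch of one common MCCM form, $\min \sum_i c_i|o_i-\bar{o}_i|$ with the adjusted opinions required to stay within a consensus threshold of their average aggregation; the data values, threshold, and aggregation rule are illustrative assumptions, not the paper's specification.

```python
import cvxpy as cp
import numpy as np

# Illustrative data (not from the paper): initial opinions and unit costs.
o = np.array([25.0, 26.0, 27.5, 28.0, 30.0])   # original opinions of 5 DMs
c = np.array([1.0, 1.2, 0.9, 1.1, 1.3])        # unit adjustment costs
eps = 1.0                                      # consensus threshold (assumed)

o_adj = cp.Variable(5)        # adjusted opinions
o_star = cp.Variable()        # consensus opinion

cost = cp.sum(cp.multiply(c, cp.abs(o - o_adj)))
constraints = [cp.abs(o_adj - o_star) <= eps,    # each adjusted opinion stays near the consensus
               o_star == cp.sum(o_adj) / 5]      # consensus = average of adjusted opinions

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(round(prob.value, 4), round(float(o_star.value), 4))
```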

Subsequently, Gong et al. [15, 16] studied a consensus model that considers decision-makers’ utility preferences under budget constraints, jointly accounting for minimizing the negotiation cost and maximizing the utility value; their model was constructed as follows: where $B$ is the total budget of the consensus negotiation, and the remaining symbols denote the utility value of the GDM process, the utility adjustment coefficient, the utility functions of the individual decision-makers and the moderator, and the upper and lower bounds of each decision-maker’s opinion preference interval.

However, in real life, the uncertainty of data is always inevitable. Gong et al. [23] considered the cost and decision maker’s utility in the GDM as a whole, and proposed a consensus model with cost probability constraints and utility preferences:

2.2. Conditional Value at Risk Method

CVaR (conditional value-at-risk) refers to the conditional mean of the losses exceeding VaR (value-at-risk) over a given period under normal market conditions and a certain confidence level; it represents the average level of the excess losses. It is a risk measure developed from VaR and is often used to measure losses in the worst-case scenario.

Suppose $\beta$ is a given confidence level and $f(x,\xi)$ is a measurable loss function. The probability that the loss does not exceed a threshold $\alpha$ can be expressed as $\Psi(x,\alpha)=P\{f(x,\xi)\le\alpha\}$. The VaR indicator is the smallest threshold that the loss exceeds with probability at most $1-\beta$:
$$\operatorname{VaR}_{\beta}(x)=\min\{\alpha\in\mathbb{R}:\Psi(x,\alpha)\ge\beta\}.$$

Definition 1. See [47]. Suppose $\beta$ is a given confidence level and the distribution function $\Psi(x,\cdot)$ is continuous everywhere with respect to $\alpha$. The mean of the truncated distribution of the loss function can be defined as
$$\operatorname{CVaR}_{\beta}(x)=\frac{1}{1-\beta}\int_{f(x,\xi)\ge \operatorname{VaR}_{\beta}(x)} f(x,\xi)\,p(\xi)\,d\xi,$$
where $p(\xi)$ is the density function of the random variable $\xi$.
Since CVaR is the expected loss conditional on the loss being no less than VaR, it is generally not easy to compute directly from this definition. Rockafellar et al. [47] gave an equivalent computational form, namely,
$$\operatorname{CVaR}_{\beta}(x)=\min_{\alpha\in\mathbb{R}}\Big\{\alpha+\frac{1}{1-\beta}\,\mathbb{E}\big[(f(x,\xi)-\alpha)^{+}\big]\Big\}.$$
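As a small numerical illustration of this formula (with an assumed loss sample, not data from the paper), the empirical CVaR can be computed by taking the minimizing $\alpha$ to be the sample $\beta$-quantile:

```python
import numpy as np

def empirical_var_cvar(losses, beta=0.95):
    """Empirical VaR and CVaR at confidence level beta.

    CVaR_beta = min_alpha { alpha + E[(L - alpha)^+] / (1 - beta) },
    with the minimum attained at alpha = VaR_beta (the beta-quantile of L).
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, beta)                        # VaR: the beta-quantile
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - beta)
    return var, cvar

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=100_000)     # assumed loss sample
print(empirical_var_cvar(sample, beta=0.95))
```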

2.3. Distributionally Robust Chance Constrained Consensus Model

Generally speaking, many problems in GDM can be expressed by the following optimization model with chance constraints: where $x$ is the decision variable, the probability distribution P is known, and the uncertain constraint functions depend affinely on the random variable $\xi$; then we have

In the process of studying chance-constrained CVaR approximation, an auxiliary function is usually introduced to express the uncertainty of the model.

The chance constraint in (7) can then be rewritten as

A conventional way to make chance constraints immune to uncertainty is the distributionally robust method. To this end, assuming that the information of the first and second moments of the probability distribution P is known, consider the following model:

In practice, the worst-case CVaR is often used to construct convex approximations of chance constraints. As stated in Corollary 2.1 of Zymler et al. [48], if the loss function is convex or piecewise linear, the following expressions are equivalent:
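Restated in generic notation (a paraphrase of this equivalence rather than the paper's original display), with $\varepsilon$ the allowed violation probability and $\mathcal{P}$ the moment-based ambiguity set:
$$\inf_{P\in\mathcal{P}} P\big[L(x,\xi)\le 0\big]\ \ge\ 1-\varepsilon \quad\Longleftrightarrow\quad \sup_{P\in\mathcal{P}}\operatorname{P\text{-}CVaR}_{\varepsilon}\big(L(x,\xi)\big)\ \le\ 0.$$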

This creates a bridge between distributionally robust chance constraints and the worst-case CVaR. Next, the consensus model with distributionally robust joint chance constraints is constructed. On the premise that the first- and second-moment information is known, we use the distributionally robust optimization method to characterize the random variables in the unit adjustment cost. Both the constraints and the objective function are processed by the method of moments and transformed into a tractable semidefinite programming model: where $\xi$ is a random variable affecting the unit adjustment cost. Following the idea of Ben-Tal et al. [32] and the processing technique of Zymler et al. [48], the uncertain unit adjustment cost can be expressed as an affine function of $\xi$, so that (13) can be rewritten as

According to the equivalence relation of (12), model (10) can be transformed into

To solve the model using the moment approach, the first and second moments of the random variable $\xi$ must be known. Let $\mu$ be defined as the mean and $\Sigma$ as the covariance matrix; then the set of admissible probability distributions can be expressed as

Using duality theory and the moment method (see Theorems A.1 and B.1 in the Appendix), the equivalent form of the distributionally robust chance-constrained consensus model (DRO-MCCM) can be obtained as follows:
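As a hedged illustration of the key building block of this reformulation, the following Python/cvxpy sketch computes the worst-case CVaR of an affine loss $a^{\top}\xi+b$ over all distributions sharing a given mean and covariance, using the moment-matrix construction of Zymler et al. [48]; all numerical values are assumptions for illustration, not the paper's data.

```python
import cvxpy as cp
import numpy as np

# Illustrative moment information and affine loss (assumed, not the paper's data).
mu = np.array([0.0, 0.0])
Sigma = np.eye(2)
a = np.array([1.0, -0.5])          # loss L(xi) = a' xi + b
b = 0.2
eps = 0.05                         # allowed violation probability

k = len(mu)
Omega = np.block([[Sigma + np.outer(mu, mu), mu[:, None]],
                  [mu[None, :], np.ones((1, 1))]])

M = cp.Variable((k + 1, k + 1), symmetric=True)
beta = cp.Variable()

# Lifted (block-matrix) representation of the affine loss minus beta.
A = cp.bmat([[np.zeros((k, k)), (a / 2)[:, None]],
             [(a / 2)[None, :], cp.reshape(b - beta, (1, 1))]])

prob = cp.Problem(cp.Minimize(beta + cp.trace(Omega @ M) / eps),
                  [M >> 0, M - A >> 0])
prob.solve(solver=cp.SCS)
print("worst-case CVaR bound:", round(prob.value, 4))
```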

2.4. DRO-MCCM with Different Preference Utility

Next, we comprehensively consider the utility preference relationship between decision-makers and the moderator. The DMs’ opinion preferences are represented by probability distributions and membership (utility) functions of random variables, respectively. During the negotiation, the moderator should fully take the decision-makers’ utility preferences into account while keeping the total consensus expense under control. Therefore, we propose the following two types of distributionally robust chance-constrained optimization consensus models (DRO-MCCM) that consider both utility preferences and the minimum negotiation cost.

Scenario 1. The DMs’ opinions are expressed by utility preferences, while the moderator’s opinion follows a random distribution.
Assuming that the moderator’s viewpoint obeys a Gaussian distribution and the decision-makers’ opinions are represented by linear right membership functions, the resulting DRO-MCCM is as follows:

Scenario 2. The DMs’ opinions follow a random distribution, while the moderator’s opinion lies within the decision region of a utility preference function.
Assuming that the moderator’s opinion $a$ is represented by a left membership function and the DMs’ opinions $b$ obey a Gaussian distribution, the distributionally robust chance-constrained consensus decision-making model is as follows:
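To illustrate how the two kinds of opinion preferences can be evaluated numerically, here is a small Python sketch of linear right/left membership functions and a Gaussian opinion density; the interval endpoints and distribution parameters are assumed for illustration (the Gaussian mirrors the N(27, 1) board opinion used later in Section 3.1).

```python
import numpy as np
from scipy.stats import norm

def right_membership(x, low, high):
    """Linear right membership: 0 at `low`, rising linearly to 1 at `high`."""
    return np.clip((x - low) / (high - low), 0.0, 1.0)

def left_membership(x, low, high):
    """Linear left membership: 1 at `low`, falling linearly to 0 at `high`."""
    return np.clip((high - x) / (high - low), 0.0, 1.0)

x = 27.0                                   # candidate consensus opinion (assumed)
print(right_membership(x, 25.0, 30.0))     # DM utility preference, Scenario 1
print(left_membership(x, 25.0, 30.0))      # moderator utility preference, Scenario 2
print(norm(loc=27.0, scale=1.0).pdf(x))    # Gaussian opinion density, e.g. N(27, 1)
```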

2.5. RO-MCCM

In Section 3.4, we will make a comprehensive comparative analysis of the models. To this end, we introduce the robust minimum cost consensus model (RO-MCCM) with different uncertainty sets (box set, ellipsoid set, and budgeted set).

Corollary 1. (See [32]). If the unit adjustment cost in model (2) is placed in the box uncertainty set, the robust cost consensus model of the box set (Box-MCCM) can be obtained as follows:

Corollary 2. (See [32]). If the unit adjustment cost in model (2) fluctuates in the ellipsoid uncertainty set , then the robust cost consensus model (Epd-MCCM) of the ellipsoid uncertainty set is

Corollary 3. (See [32]). If the unit adjustment cost in model (2) is placed in the budget uncertainty set , the robust cost consensus model (Bud-MCCM) of the budgeted uncertainty set can be obtained as follows:
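A minimal sketch of the simplest of these counterparts, under the common assumption that each unit cost lies in a box $[\bar{c}_i-\hat{c}_i,\ \bar{c}_i+\hat{c}_i]$: since the adjustment magnitudes are nonnegative, the worst case simply charges the upper cost bound. The ellipsoid and budgeted counterparts instead add a norm penalty in the usual Ben-Tal/Bertsimas–Sim fashion. All data values are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

o = np.array([25.0, 26.0, 27.5, 28.0, 30.0])   # initial opinions (illustrative)
c_bar = np.array([1.0, 1.2, 0.9, 1.1, 1.3])    # nominal unit costs
c_hat = 0.2 * c_bar                            # assumed half-widths of the box

o_prime = cp.Variable()                        # consensus opinion (all DMs move to it)

# Box worst case: each uncertain cost takes its upper bound c_bar + c_hat.
worst_cost = cp.sum(cp.multiply(c_bar + c_hat, cp.abs(o - o_prime)))

prob = cp.Problem(cp.Minimize(worst_cost))
prob.solve()
print(round(prob.value, 4), round(float(o_prime.value), 4))
```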

3. Numerical Experiment

Supply chain management refers to the integrated management of product flow, information flow, and capital flow from suppliers to customers in order to maximize the value of the supply chain. It is a complex system that includes procurement and supply, production operations, and logistics management. As a crucial link in operations management, the pricing of new products affects the fate of the product itself and the future of the enterprise, and it often requires multiple departments to participate in the decision. Compared with other products, new products have the advantages of low competition and leading technology, but they also have the disadvantages of not yet being recognized by consumers and a high product cost. Therefore, when pricing new products, it is necessary to consider not only recovering the investment and earning profits as soon as possible but also making the new products easy for consumers to accept. To provide the greatest possible economic benefit, experts from a variety of departments collaborate to decide on the new product’s price and finally come to a consensus.

In 2021, T Enterprises unveiled a new pure electric vehicle, the Model Y. Skimming pricing, penetration pricing, and satisfaction pricing are typically used for new product pricing. In several pricing meetings for the Model Y, we learned that the planning and technology departments expect to use skimming pricing to put the product on the market at a higher price before competitors enter, so as to obtain high short-term profit, recover the investment as soon as possible, and reduce risk. The marketing and manufacturing departments put forward penetration pricing, seeking high turnover with lower profit, developing product sales channels, preemptively occupying the market, and improving the reputation of the enterprise and its brand. The financial department, through a comprehensive evaluation of the advantages and disadvantages of the new pure electric vehicle (zero deed tax, strong power, comfortable handling, etc.), hopes to adopt satisfaction pricing from the perspective of controlling the balance of payments and reducing risk as far as possible. We label the experts from these five departments accordingly. The board, acting as moderator, must invest some resources (money, time, etc.) to bring all parties’ viewpoints together and reach an agreement, while also keeping a particular budget under control. According to the experts, if a new product is priced too high, it will be difficult to sell and the company will be exposed to very significant risk; however, if the price is too low, the enterprise will be under pressure from high development expenditures. Due to the different knowledge backgrounds, behavior habits, and interests of experts in various fields, the opinions and preferences they give also differ. Therefore, we construct the distributionally robust chance-constrained consensus decision-making model under two different scenarios, using utility functions and probability distribution functions to express the opinion preferences of the decision-makers and the board, so as to optimize the budget and ensure that the utility of the majority of decision-makers is satisfied. By introducing probabilistic cost constraints, the board of directors can roughly grasp the pricing results before the consensus negotiation and make reasonable financial plans in advance, which helps to reach consensus effectively. The pricing data given by the five experts (unit: ten thousand yuan) serve as the initial opinions. In this section, all experiments are done on a laptop with 8 GB of RAM and a 2.3 GHz i5 core.

3.1. Model Optimal Solution for Scenario 1

Suppose the five experts come from various fields and the unit adjustment cost of each DM (unit: thousand RMB) is subject to random perturbation. Given the initial adjustment costs, we take a random matrix that obeys the standard normal distribution.

This random matrix serves as the sample of the random variable that influences the unit adjustment cost, from which we obtain the expectation and the covariance matrix.

Based on Section 2.4, we set the confidence level, the adjustment coefficient, and the utility parameter, and denote the negotiated budget of the board by B. In Scenario 1 (see model (13)), we assume that the board’s opinion follows a Gaussian distribution with parameters (27, 1) and that the expert opinions are expressed by linear right membership functions on given intervals. To obtain the optimal solution of model (13) (minimum cost budget 10.6325 and optimal utility value 0.4485), we adopt Monte Carlo simulation and the optimization toolkit RSOME.
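As a hedged sketch of this experimental setup (the exact data and the RSOME model used in the paper are not reproduced here), the following Python snippet generates a standard normal sample for the cost perturbation and estimates the mean and covariance matrix that feed the moment-based model; the baseline costs and perturbation scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2021)

n_experts = 5
n_samples = 10_000                        # Monte Carlo sample size (assumed)

# Random perturbation of the unit adjustment costs, standard normal as in the text.
xi = rng.standard_normal((n_samples, n_experts))

mu_hat = xi.mean(axis=0)                  # estimated expectation of xi
Sigma_hat = np.cov(xi, rowvar=False)      # estimated covariance matrix of xi

c0 = np.array([1.0, 1.2, 0.9, 1.1, 1.3])  # assumed initial unit costs (thousand RMB)
c_samples = c0 + 0.1 * xi                 # perturbed unit costs (assumed 0.1 scale)

print(mu_hat.round(3), np.diag(Sigma_hat).round(3))
```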

3.2. Model Optimal Solution for Scenario 2

Based on Section 2.4, we consider the optimal solution of the consensus decision model in Scenario 2 (see model (14)). Here, it is assumed that the board’s opinion is described by a left membership function on a given interval and that the expert opinions follow a Gaussian distribution. Under the same assumptions as in Section 3.1, we obtain the optimal solution of model (14) (minimum consensus cost 14.8903 and optimal utility value 0.3722).

Tables 1 and 2 show how the optimal solution of the DRO-MCCM changes with the utility priority coefficient under the two scenarios. In Scenario 1, when the coefficient weighs the two objectives equally, the minimum cost and the maximum utility are equally important in the consensus decision model’s objective function; the group opinions of the five experts are all close to the lower limit of the utility function range, and at this point both the consensus negotiation cost and the utility value are relatively low. As the table shows, when the coefficient increases, the board raises the consensus budget, which improves the satisfaction of the experts and yields a higher utility value, thereby actively promoting consensus. In Scenario 2, when the utility coefficient is small, the moderator’s preference follows a left-biased function and the consensus opinion drifts toward the right-end threshold of the interval. As the coefficient increases, the utility of the experts is gradually given more weight in the consensus negotiation process, and with the surplus of the negotiated budget, the consensus opinion gradually tends toward the lower threshold.

Tables 1 and 2 also show the results of the distributionally robust consensus models for the two scenarios at different confidence levels. Both the consensus cost and the utility value increase as the confidence level rises: as the confidence level grows, the allowed violation probability narrows and the cost budget constraint becomes increasingly stringent. To avoid overrunning the budget, the board of directors appropriately increases the consensus negotiation cost. A low confidence level corresponds to low cost and low efficiency. However, when the confidence level of the chance constraint is high, the consensus cost budget becomes saturated, so the total cost either no longer increases with the confidence level or the increase is not immediately apparent. According to the results in the table, the board of directors can control the probability of reaching consensus under different budgets and can also adjust the cost budget to reach consensus.

3.3. Sensitivity Analysis

In Section 2, we introduced the method of moments to show that the original chance-constrained model can be equivalently transformed into a tractable semidefinite program. When the first- and second-moment information is available, we represent the range of the covariance matrix by scaling it with a covariance factor. Moment information varies across different probability distributions, so it is important to investigate how changes in the covariance factor affect the model. Here, we assume that the confidence level is 0.95, fix the utility adjustment coefficient, and let the covariance factor vary from 0.01 to 5.
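As a rough sketch of how such a sweep can be organized, the snippet below scales a baseline covariance by the factor and evaluates a standard closed-form moment bound for the worst-case CVaR of an affine loss (used here only to isolate the effect of the covariance factor on the risk term; the saturation reported in the tables arises from the full model's budget and utility bounds, which this sketch omits). All vectors are illustrative assumptions.

```python
import numpy as np

def wc_cvar_affine(a, b, mu, Sigma, eps):
    """Closed-form worst-case CVaR of a' xi + b over all distributions with
    mean mu and covariance Sigma: mean term plus sqrt((1 - eps) / eps) * std term."""
    return float(b + a @ mu + np.sqrt((1 - eps) / eps) * np.sqrt(a @ Sigma @ a))

a = np.ones(5)                   # illustrative cost-direction vector
b = 0.0
mu = np.zeros(5)
Sigma0 = np.eye(5)               # baseline covariance (assumed)

for theta in [0.01, 0.1, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0]:
    print(theta, round(wc_cvar_affine(a, b, mu, theta * Sigma0, eps=0.05), 3))
```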

Tables 3 and 4 display the results of the distributionally robust chance-constrained consensus model for the two scenarios under various covariance factors. As the covariance factor grows, the minimum negotiation cost and the maximum consensus utility also rise. Interestingly, when the factor increases to around 4, both the budget and the utility tend to saturate, or the increase is no longer very noticeable. The larger the covariance factor, which measures the uncertainty in the covariance matrix of the random variables, the greater the dispersion of the unknown parameters. The chairman of the board will try his best to accommodate everyone’s needs to reach a consensus, including increasing the cost budget and enhancing satisfaction. However, once the consensus cost has increased to a certain level, the experts hold a bottom-line consensus opinion even if the uncertainty fluctuates greatly.

3.4. Comparative Analysis

To further investigate whether it makes sense to consider both the minimum consensus negotiation cost and the decision-makers’ utility in a group decision model, this section compares our model with the minimum cost [8] and maximum utility [15] consensus models. We fix the parameters in the DRO-MCCM, assuming a confidence level of 0.95 and a fixed utility coefficient, and take the unit adjustment cost in the maximum utility consensus model (MU-MCCM) to be the mean over the five experts, 1.1. Additionally, this section contrasts the DRO-MCCM developed in this research with the minimum cost consensus model (MCCM) and the robust cost consensus models (RO-MCCM).

As we all know, the ultimate goal of GDM is to reach a consensus efficiently, and every expert in the decision-making activity expects that his opinion will attract enough attention. The results in Table 5 show that, despite the moderator incurring the lowest total cost, the MCCM is unable to capture the importance of decision-makers in the decision-making process. If the objective function is only the utility value, although there is a controlled budget in the constraints, it is not convenient for the coordinator to make timely adjustments to the financial budget in the decision-making process. The DRO-MCCM that takes into account the consensus cost budget and the utility of the decision maker as a whole has significant practical implications.

The comparison results in Scenario 1 show that the MCCM has the lowest cost. After taking robustness into account, the budgeted-set consensus model (Bud-MCCM) has the highest utility value (0.9407), but its associated negotiation cost is likewise high (22.2382). In the DRO-MCCM, by contrast, a comparable utility value (0.9082) is achieved at a lower cost (20.8542). The results of Scenario 2 confirm this conclusion, although the values vary over a wider range. Hence, compared with existing consensus models, our distributionally robust chance-constrained consensus model is more suitable.

Figures 1 and 2 compare the results of the consensus decision models under uncertain parameters for the two scenarios. The total cost and utility of all models climb steadily as the uncertainty parameter increases. Since the pure MCCM and MU-MCCM are models for a deterministic environment, their curves do not change with the parameter, which means that under certain circumstances the board of directors can make optimal decisions without considering the uncertainty of the unit adjustment costs. Compared with the other robust consensus decision-making models, the Box-MCCM cost curve rises the most, and the other two robust models rise slightly less. Generally speaking, as the uncertainty level parameter becomes larger, the perturbation of the unit adjustment cost becomes more pronounced and the total negotiation cost increases significantly, making consensus more difficult to reach. It is interesting to note that once a certain threshold is reached, the DRO-MCCM curves level off rather than continuing to grow. The cost and utility curves of the DRO-MCCM in the two scenarios stabilize around parameter value 6, so this model is the most robust. In both figures, the utility curve climbs first and then plateaus: on the one hand, the utility value has an upper bound by construction; on the other hand, the moderator’s overall budget is constrained, and it is impractical to keep raising the cost indefinitely to boost the utility of the DMs. When dealing with uncertainty in the unit adjustment cost, the DRO-MCCM keeps the utility of the DMs at a high level while containing the total negotiation cost as much as possible. Therefore, when dealing with uncertain decision-making problems, the distributionally robust model outperforms the robust consensus decision-making models and can effectively prevent the losses brought on by uncertainty in the GDM process. Thus, research into distributionally robust consensus decision models is important and necessary.

4. Conclusion

In this paper, the utility preferences of DMs under uncertain decision-making circumstances are explicitly considered in a distributionally robust chance-constrained consensus model. We first assume that the random variable driving the variation in the unit adjustment cost obeys a probability distribution with given moment information. Then, using dual theory and the moment technique, the original model is turned into a tractable semidefinite programming problem, and CVaR is utilized to approximate the chance constraint on the consensus cost. Next, we use utility functions and Gaussian distributions to model the opinion preference relationship between decision-makers and moderator and propose a distributionally robust chance-constrained consensus decision model under two scenarios. Finally, taking new product pricing in supply chain management as an example, numerical analysis is carried out using the RSOME toolbox. The results show that considering both distributional robustness and chance constraints yields consensus results with better performance, which helps the management of the company make reasonable decisions and expresses the cost consensus problem in group decision-making more realistically. Based on the current state of research, several future research directions are pointed out:
(1) The research in this paper is limited to small-scale group decision-making problems, whereas in practice decision-makers often face large-scale problems. Hence, large-scale group consensus decision-making in an ambiguous environment is a promising area.
(2) Our model only considers single-stage decision-making, while in the real world complex decisions often go through two or even multiple stages. Therefore, considering two-stage consensus decision-making problems in future research may be very meaningful.
(3) Linguistic preference relations, expert preferences, and sentiment analysis are all effective tools for handling subjective judgments, and incorporating them into consensus decision-making models has attracted abundant research in recent years and remains a promising direction.

Appendix

A. Transformation of Constraint

Theorem A.1. In (11), the distributionally robust chance constraint is equivalent to (A.1).

Proof. From (8) it is easy to see that the distributionally robust chance constraint can be conservatively approximated by the worst-case CVaR. According to the saddle point theorem [49], the worst-case CVaR in this constraint can be rewritten as follows. By the definition of the ambiguity set, it can then be converted into the following equivalent form, in which the first constraint normalizes the probability density function while the second and third constraints fix the mean and covariance matrix, respectively. Duality theory [50] asserts that the corresponding Lagrangian function is as follows, where the multipliers attached to the respective constraints are the Lagrange multipliers. Accordingly, the dual function of (A.3) follows. Since the moment matrix is positive definite, Slater’s condition [50] holds and the optimal duality gap is 0; the problem satisfies strong duality, which means that the best bound obtained from the Lagrange dual function is tight. Moreover, the integral term is bounded only when the corresponding multiplier matrix is positive semidefinite, and the equivalence of this inequality is as follows. The above equation can be transformed into two semidefinite constraint forms. Therefore, the stated equivalence holds, and Theorem A.1 is proved.
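For reference, the generic moment-problem duality invoked in this step can be stated as follows (in standard notation, which may differ from the symbols of the omitted displays):
$$\sup_{P}\Big\{\mathbb{E}_{P}[f(\xi)]\ :\ \mathbb{E}_{P}[1]=1,\ \mathbb{E}_{P}[\xi]=\mu,\ \mathbb{E}_{P}[\xi\xi^{\top}]=\Sigma+\mu\mu^{\top}\Big\}
=\inf_{\lambda_{0},\lambda,\Lambda}\Big\{\lambda_{0}+\lambda^{\top}\mu+\big\langle\Lambda,\ \Sigma+\mu\mu^{\top}\big\rangle\ :\ \lambda_{0}+\lambda^{\top}\xi+\xi^{\top}\Lambda\xi\ \ge\ f(\xi)\ \ \forall\xi\Big\},$$
where the duality gap is zero under a Slater-type condition such as $\Sigma\succ 0$.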

B. Transformation of the Objective Function

Because the objective function also takes a distributionally robust (worst-case) form, we continue to use duality theory and the Lagrangian method to transform it into a tractable form.

Theorem B.1. In (11), the objective function is equivalent to (B.1)

Proof. Similarly to the method in Section 2.3, applying the moment method to the objective function yields the following form. In the same way, the Lagrangian method is used to construct its dual function. Under the corresponding positive semidefiniteness condition, this is equivalent to the following semidefinite constraint, and Theorem B.1 is proved.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (72074149).