Abstract

For the problem of multi-attribute group decision-making with heterogeneous preference information on attribute values and overall preference orderings on alternatives, this article proposes a neural network-based approach. In the approach, firstly, the heterogeneous preference information on attribute values and the overall preference orderings on alternatives are normalized. Secondly, based on the normalization results, two optimization models are set up to determine attribute weights and expert weights, respectively. Thirdly, based on the optimization models, two neural networks are set up and trained to determine the attribute weights and expert weights. Then, the overall values of the alternatives are obtained, as well as their rankings. Simulations of the proposed neural networks are conducted for illustration.

1. Introduction

In the course of multi-attribute decision-making, a group of experts is usually invited to take part in the evaluation [15]. For some complex multi-attribute decision-making problems, for example, international joint project selections across different countries, the experts may come from different countries. The invited experts participate in ranking and selecting the projects with the help of the Web by providing their evaluation information. They can evaluate the projects against every attribute and can also give their overall opinions on the projects, for example, preference orderings on the projects. Both the evaluation information of the projects on the attributes and the overall preference orderings on the projects are usually applied in ranking and selecting the projects.

The current research on multi-attribute group decision-making (MAGDM) problems with preference information falls into two categories: problems with preference information on attributes and problems with preference information on alternatives (e.g., the assessment and selection of international joint projects). The preference information can be used to calculate the attribute weights. The determination of attribute weights is an important factor for alternative selection in multi-attribute group decision-making and plays a crucial role in the ranking or selection of the alternatives [6, 7]. This article focuses on the MAGDM problems with two sources of information: the preference information on alternatives from the experts and the attribute values in the decision matrix. Both sources of information can be used to calculate the attribute weights and determine the alternative rankings based on the experts' collective information.

For the MAGDM problems, the preference information on alternatives can be used to derive their rankings or selection directly [3] or to determine the attribute weights [8, 9]. The attribute values in the decision matrix can also be used to determine the attribute weights, for example, the entropy method [10, 11] and maximizing deviation method [12, 13]. For the MAGDM problems with the preference information on alternatives and decision matrix, it is desirable to establish optimization models to determine attribute weights [8, 14, 15].

Since the research question of this study is to integrate the attribute values in the decision matrix with the experts' preference information on alternatives, the neural network is a good choice for memorizing the experts' preference information. However, it is not common to solve the MAGDM problems with heterogeneous preference information on alternatives by means of neural networks. In this article, two optimization models are established to solve for the attribute weights and expert weights, respectively, in the MAGDM problems with heterogeneous preference information on alternatives and a decision matrix. Based on the established optimization models, two neural networks are designed to determine the attribute weights and expert weights. In order to train the neural networks, the attribute values of the alternatives given by the experts are employed as the inputs, and the preference information on alternatives from the experts is adopted as the expected outputs. After training, the attribute weights are obtained, which match the attribute values of the alternatives with the preference information on alternatives given by the experts. The obtained attribute weights are then used to guide the alternative rankings, which is the novelty of this study: applying neural networks to MAGDM problems with heterogeneous preference information on alternatives and a decision matrix.

The organization of this article is as follows. Section 2 gives a literature review and motivation. Section 3 describes the MAGDM problems with heterogeneous preference information on decision matrix and alternatives. A neural network-based approach to the MAGDM problems is proposed in Section 4, where the decision information is normalized, and the two optimization models are established. Two neural networks are designed and trained to determine attribute weights and expert weights. Section 5 gives simulations for illustrating the proposed approach. Section 6 summarizes the study in this article.

2. Literature Review and Motivation

2.1. The Preference Information in MAGDM

In the course of multi-attribute group decision-making, based on the experts’ preference information, the weights of attributes can be obtained. In [8, 14], the preference information on alternatives is employed to determine the attribute weights by setting up optimization models, respectively.

Also, in the course of multi-attribute group decision-making, based on the experts’ preference information, the weights of experts can be obtained. In [5], a new linear programming model is proposed to find the optimal expert weights based on deviation function. In [16], an entropy-based approach is proposed to determine the weights of experts by defining a projection measure for the hybrid information representations.

For the MAGDM problems with heterogeneous preference information on the decision matrix and alternatives, it is desirable to establish optimization models to determine attribute weights [8, 14, 15]. In [8], an integration approach is proposed to deal with experts' fuzzy preference information on alternatives and the decision matrix by setting up an optimization model. In [14], three sources of information are integrated into a general model framework: experts' fuzzy preference relations on alternatives, experts' multiplicative preference relations on attributes, and the decision matrix. In [15], a consistency-based approach is proposed for the MADM problems with preference information on alternatives by defining a geometric consistency index. An algorithm is proposed to adjust the preference information and the decision matrix simultaneously to improve the geometric consistency index in MADM.

In this study, two neural networks are employed to memorize the experts’ preference information on alternatives and trained by dealing with the attribute values in decision matrix at the same time.

2.2. Research on MAGDM with Preference Based on Neural Networks

Neural networks have self-adaptive and self-learning abilities and are used in some decision support processes, for example, foreign currency risk prediction [17], demand forecasting [18], travelers' choice pattern recognition [19], and service provider selection [20].

In [17], a neural network is used to predict the direction and magnitude of foreign exchange rate movements, considering multiple macroeconomic and market-microstructure variables of the foreign exchange market. In [18], a new approach is proposed to forecast uncertain customer demand in a multilevel supply chain system by means of the neural network method. In [19], a neural network-based approach is developed to investigate travelers' decision rule heterogeneity, where the neural network is trained to recognize the travelers' choice patterns among four distinct decision rules so that the travelers can be classified. In [20], an adaptive fuzzy-neuro approach is proposed for selecting a group of service providers, in which a maintenance service network in agriculture is designed and evaluation criteria are defined from both qualitative and quantitative aspects.

Neural networks have also begun to be applied in multi-attribute decision-making processes, such as in [21, 22]. In [21], an approach is proposed that integrates neural networks and data envelopment analysis to evaluate suppliers under incomplete information on the evaluation criteria. In [22], a neural network is used to learn the relation among criteria and alternatives and to rank the alternatives.

2.3. Motivation of the Study

It is not yet common to capture and represent the preference information of experts by means of neural networks and then assist in solving the MAGDM problems with heterogeneous preference information in an uncertain decision-making environment. It is desirable to employ neural networks to memorize the experts' preference information and the decision processes by training the attribute weights, so that the trained networks can guide the decision-making processes.

The motivation of this article is to develop and apply neural networks for MAGDM problems with heterogeneous preference information in an uncertain environment and to further aid the decision-making task with less effort from the experts. The necessity of a neural network-based approach in these MAGDM problems lies in integrating the attribute values in the decision matrix with the experts' preference information on alternatives, by training the neural networks and memorizing the two sources of information in the weights. The contribution of this article is to develop and apply neural networks for memorizing the experts' preference information in MAGDM problems with heterogeneous preference information in an uncertain environment.

3. Problem Descriptions

This article considers the MAGDM problems where the decision matrix is presented with heterogeneous information (i.e., preference orderings [23], interval numbers, linguistic terms [24, 25], and uncertain linguistic variables [26, 27]) and the invited experts give their overall preference orderings on the alternatives. The following assumptions and notations are used to describe the MAGDM problems.

3.1. Composition of the Problem


Let E = {e1, e2, …, eP} denote the set of experts, who provide preference information on alternatives. On one hand, the experts give their preference information on alternatives against the attributes, respectively. On the other hand, they also express their overall preference orderings on the alternatives.

Let S = {S1, S2, …, Sm} denote a discrete set of m (m > 1) possible alternatives.

Let C = {C1, C2, …, Cn} denote a set of n (n > 1) attributes.

3.2. Experts’ Preference Information

In this article, each expert provides his/her preference information on the alternatives according to each attribute by means of preference orderings, interval numbers, linguistic terms, and uncertain linguistic variables.

Let Ak = (aijk) denote the m × n decision matrix given by expert ek, k = 1,2,…,P, where aijk is the preference information on alternative Si with respect to attribute Cj, i = 1,2,…,m, j = 1,2,…,n.

Column j of the decision matrix collects the preference information on the alternatives according to attribute Cj and is expressed by means of preference orderings, vectors of interval numbers, vectors of linguistic terms, or vectors of uncertain linguistic variables:

(1) An ordinal ranking is used by an expert ek to express his/her preference on the alternatives according to attribute Cj, j = 1,2,…,n, where the ranking is a permutation function over the index set {1, …, m} and represents the position of alternative Si against attribute Cj. Usually, the alternatives are ranked from the best to the worst.

(2) A vector of interval numbers is used by an expert ek to express his/her preference on the alternatives according to attribute Cj, j = 1,2,…,n, where each element is an interval number of the preference information on alternative Si, i = 1, 2,…, m.

(3) A vector of linguistic terms is used by an expert ek to express his/her preference on the alternatives according to attribute Cj, j = 1,2,…,n [26, 27], where each element is a linguistic term of the preference information on alternative Si, i = 1,2,…,m. It is assumed that the terms (i = 1,…,m) are from the linguistic term set TERMSETj for attribute Cj, and for different attributes, the granularities of TERMSETj may be different, j = 1, 2,…, n.

(4) A vector of uncertain linguistic variables is used by an expert ek to express his/her preference on the alternatives according to attribute Cj, j = 1,2,…,n, k = 1,2,…,P, where each element is an uncertain linguistic variable of the preference information on alternative Si against attribute Cj. It is assumed that the lower and upper limits of each uncertain linguistic variable (i = 1,…,m, j = 1,2,…,n, k = 1,2,…,P) are from the linguistic term set TERMSETj for attribute Cj, and for different attributes, the granularities of TERMSETj may be different.

In addition, the experts also express their overall preference on the alternatives in the form of preference orderings. Suppose such an ordering is given by each expert ek, k = 1,2,…,P, whose ith element denotes the overall preference ordering of alternative Si, i = 1, 2,…, m.


3.3. Focus of the Problem

The problem focused is to rank the alternatives and select the best one based on the preference information on the heterogeneous attribute values in the decision matrix and the overall preference orderings on the alternatives. In the following section, a neural network-based approach is proposed, where the heterogeneous preference information on the attribute values is normalized. Two optimization models are set up and accordingly, two neural networks are designed and trained to obtain the attribute weights and expert weights. Finally, the rankings of the alternatives are obtained.

The neural networks proposed in this study are intended to solve the MAGDM problems with heterogeneous preference information on the alternatives, by means of their learning abilities, which can memorize the subjective preference information given by experts and further guide the solution to the MAGDM problems.

4. The Proposed Approach

In the proposed approach, four steps are employed.

Step 1. Normalize the heterogeneous attribute values in the decision matrix and the overall preference orderings on the alternatives.

Step 2. Set up optimization models to determine attribute weights and expert weights.

Step 3. Set up neural networks to determine attribute weights and expert weights based on the optimization models set up in Step 2.

Step 4. Calculate the overall values of the alternatives.

4.1. Normalize Decision Information

In the decision matrix given by expert ek (k = 1,…,P), the attribute values can be expressed in the forms of preference orderings, interval numbers, linguistic terms, and uncertain linguistic variables for different attributes. Therefore, normalization of the attribute values is necessary to make them comparable (i.e., utility values between 0 and 1). The normalization results are denoted as bijk, i = 1,…,m, j = 1,…,n, k = 1,…,P.

4.1.1. Normalize Preference Orderings

For the attributes that are evaluated by means of preference orderings, that is, for such an attribute Cj, the following method can be applied to normalize the orderings [1]:

Then, the attribute value of alternative Si can be obtained:

Furthermore, the attribute value of alternative Si can be normalized in the following way:
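As a concrete sketch of the ordering normalization, the conversion of preference orderings to utilities can be implemented as below. The linear scoring rule u_i = (m − o_i)/(m − 1), which gives the top-ranked alternative a utility of 1 and the bottom-ranked a utility of 0, is an assumption here; the exact formula of [1] may differ.

```python
def normalize_ordering(positions):
    """Map ordinal positions (1 = best, m = worst) to utilities in [0, 1].

    Assumes the linear scoring rule u_i = (m - o_i) / (m - 1); the exact
    normalization formula of the cited method may differ.
    """
    m = len(positions)
    return [(m - o) / (m - 1) for o in positions]
```

For example, an ordering (5, 2, 3, 4, 1) over five alternatives maps to (0, 0.75, 0.5, 0.25, 1) under this rule.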

4.1.2. Normalize Intervals

Firstly, normalize the intervals into dimensionless ones by means of the following methods.

For beneficial attributes:

For cost attributes:

Then, transform the dimensionless intervals into crisp values.

Definition 1. Given two intervals of attribute values for alternatives Si and Sk on attribute Cj (i, k = 1,…,m, j = 1,…,n), the distance between them is defined as follows:

Definition 2. Given the attribute values of intervals for an attribute Cj, the positive ideal attribute value for it is defined as follows:

Definition 3. Given the attribute values of intervals for an attribute Cj, the negative ideal attribute value for it is defined as follows. Then, the distances between each attribute value and the positive and negative ideal attribute values are obtained as follows:

Definition 4. Given the attribute values of intervals for an attribute Cj, the relative distance between each attribute value and the ideal attribute values is defined as follows:
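A hedged sketch of the interval-normalization steps in Definitions 1–4 is given below. It assumes the intervals are already dimensionless and beneficial, a root-mean-square interval distance, ideal intervals formed from the componentwise maxima and minima, and the relative distance d_neg / (d_pos + d_neg); the exact formulas of the definitions may differ.

```python
import math

def interval_distance(a, b):
    """Distance between intervals a = [a_l, a_u] and b = [b_l, b_u]
    (a root-mean-square form, assumed; cf. Definition 1)."""
    return math.sqrt(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / 2)

def relative_closeness(col):
    """Normalize a column of dimensionless intervals by their relative
    distance to the positive/negative ideal intervals (cf. Definitions 2-4)."""
    pos = (max(a[0] for a in col), max(a[1] for a in col))  # positive ideal
    neg = (min(a[0] for a in col), min(a[1] for a in col))  # negative ideal
    out = []
    for a in col:
        d_pos = interval_distance(a, pos)
        d_neg = interval_distance(a, neg)
        out.append(d_neg / (d_pos + d_neg))  # assumes not all intervals equal
    return out
```

An interval equal to the positive ideal receives the value 1, one equal to the negative ideal receives 0, and intermediate intervals fall in between.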

4.1.3. Normalize Linguistic Information

Linguistic information is very helpful for experts to express their opinions in uncertain decision-making situations. For different attributes, linguistic terms employed for assessment tasks are often of different granularities.

Suppose TERMSET = {l0, l1, …, lg} is a linguistic term set with g + 1 elements. By the definitions in [28], a linguistic term set is ordered and equipped with inverse operators.

Definition 5. If a linguistic term li is expressed in the form of a triangular fuzzy number, then its membership function is determined by the lower bound, modal value, and upper bound of the triangular fuzzy number [28]. Suppose TERMSET = {l0 = none, l1 = worse, l2 = bad, l3 = fair, l4 = good, l5 = very good, l6 = excellent}. The linguistic terms and their corresponding triangular fuzzy numbers in this TERMSET are shown in Table 1. In addition, the triangular fuzzy numbers for the linguistic terms are shown in Figure 1.
Given a linguistic term lh that is expressed with a triangular fuzzy number, its utility value is defined as follows [29]. Thus, given attribute values aij that are expressed with linguistic terms for an attribute Cj, the following method is used to normalize aij:
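A minimal sketch of the linguistic normalization is shown below, assuming the standard uniform triangular fuzzy partition of [0, 1] and the defuzzification u = (a + 2b + c)/4; both are common choices and are assumptions, not necessarily the exact formulas of [28, 29].

```python
def term_to_tfn(h, g):
    """Triangular fuzzy number for term l_h in a term set {l_0, ..., l_g}.

    Assumes the uniform partition l_h -> (max((h-1)/g, 0), h/g, min((h+1)/g, 1)).
    """
    return (max((h - 1) / g, 0.0), h / g, min((h + 1) / g, 1.0))

def tfn_utility(tfn):
    """Utility of a triangular fuzzy number (a, b, c); one common
    defuzzification, assumed here in place of the formula of [29]."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4
```

For the seven-term set of Table 1 (g = 6), the middle term l3 = fair receives utility 0.5 under these assumptions.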

4.1.4. Normalize Linguistic Variables

Definition 6. An interval is called an uncertain linguistic variable if both of its endpoints are linguistic terms, where the lower and upper limits are drawn from the same linguistic term set.

Definition 7. Suppose two uncertain linguistic variables are given; then, the distance between them is defined as follows [29]:

Definition 8. Given an attribute Cj that is assessed by means of uncertain linguistic variables, the positive attribute value for it is defined as follows:

Definition 9. Given an attribute Cj that is assessed by means of uncertain linguistic variables, the grey correlation degree between each attribute value and the positive attribute value is defined as follows, where sep(·, ·) is the distance function between two uncertain linguistic variables given in Definition 7 and the positive attribute value for attribute Cj is as given in Definition 8. ρ is a distinguishing coefficient and is usually set to 0.5.
Therefore, by means of the above steps (i.e., (16)–(19)), attribute values which are expressed with uncertain linguistic variables are normalized into the utility values between 0 and 1.
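A hedged sketch of Definitions 7–9 follows, representing each uncertain linguistic variable by its pair of term indices. The distance form (|a − c| + |b − d|)/(2g) and the grey relational coefficient (s_min + ρ·s_max)/(s_i + ρ·s_max) are assumptions based on common definitions, not necessarily the exact formulas of [29].

```python
def sep(u, v, g):
    """Distance between uncertain linguistic variables u = [l_a, l_b] and
    v = [l_c, l_d], given as index pairs over a term set {l_0, ..., l_g}
    (an assumed form; cf. Definition 7)."""
    return (abs(u[0] - v[0]) + abs(u[1] - v[1])) / (2 * g)

def grey_correlation(seps, rho=0.5):
    """Grey relational coefficients from the distances to the positive
    attribute value (cf. Definition 9); rho is the distinguishing coefficient."""
    s_min, s_max = min(seps), max(seps)
    return [(s_min + rho * s_max) / (s + rho * s_max) for s in seps]

# Example: three assessments on a granularity-7 term set (g = 6)
col = [(4, 5), (2, 3), (6, 6)]                          # index pairs (lower, upper)
pos = (max(a for a, _ in col), max(b for _, b in col))  # positive attribute value
seps = [sep(u, pos, 6) for u in col]
coeffs = grey_correlation(seps)
```

The assessment equal to the positive attribute value receives the coefficient 1, and more distant assessments receive smaller coefficients, yielding utilities between 0 and 1.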
Based on the methods stated above, the heterogeneous attribute values from an expert ek are normalized, and the results are denoted as bijk, i = 1,…,m, j = 1,…,n, k = 1,…,P.

4.2. Set Up Optimization Models to Determine Attribute Weights and Expert Weights

The following notations are adopted to facilitate describing the proposed approach.

Let W = (w1, w2, …, wn) denote the vector of attribute weights, where wj is the weight of attribute Cj and is between 0 and 1, j = 1,…,n, while w1 + w2 + ⋯ + wn = 1.

Let Wk = (w1k, w2k, …, wnk) denote the vector of attribute weights that is derived from the preference information of expert ek, k = 1,…,P, where wjk is the weight of attribute Cj and is between 0 and 1, j = 1,…,n, while w1k + w2k + ⋯ + wnk = 1.

By the normalization method for preference orderings, suppose the overall preference ordering on the alternatives given by expert ek is normalized into (y1k, y2k, …, ymk), k = 1,…,P.

Let λ = (λ1, λ2, …, λP) denote the vector of expert weights, where λk is the weight of expert ek, k = 1,…,P.

4.2.1. Set Up the Optimization Model to Determine Attribute Weights

Definition 10. Given the normalized decision matrix from expert ek and his/her normalized overall preference values y1k, …, ymk on the alternatives, the distance disik between the overall value of Si and the expert's preference on it is defined as follows:

In order to determine the attribute weights that minimize the distances disik (i = 1,…,m, k = 1,…,P) as in Definition 10, the following model is set up:

s.t.

After solving model (21a)–(21c), the optimal attribute weights can be obtained, as well as the values of the distances disik (i = 1,…,m, k = 1,…,P), which are denoted as the elements of the following matrix dis:

4.2.2. Set Up the Optimization Model to Determine Expert Weights

In order to determine the expert weights that minimize the elements of the matrix dis (i.e., the values of disik, i = 1,…,m, k = 1,…,P) for all the alternatives, the weighted sum of the distances should be minimized. Ideally, the optimization goal would be 0; however, the weighted sum remains larger than 0 in practice. Thus, the desirable goal is chosen as 0.0001 and the following optimization model is set up:

s.t.

4.3. Set Up Neural Networks to Determine Attribute Weights and Expert Weights

4.3.1. Set Up a Neural Network to Determine Attribute Weights

(1) Network structure. According to models (21a)–(21c), a linear neural network is employed to train the attribute weights, as shown in Figure 2. The proposed neural network is composed of three components: the input layer, the output layer, and the expected output.

For the input layer, the attribute values of each alternative are used as the input data, that is, (bi1k, bi2k, …, bink). The number of input-layer nodes equals the number of attributes, n. Since there are m alternatives, m samples are adopted to train the neural network.

For the output layer, there is only one node, which denotes the output of the neural network. The output of the neural network is the weighted sum of the input data, and the connections between the input nodes and the output node carry the attribute weights.

The distance between the output of the neural network and the expected output yik is used for correcting the attribute weights.

(2) Reasoning Process. For alternative Si, there are n attribute values that are the inputs of the neural network as proposed in Figure 2, and the corresponding output is as follows:

Since yik is the expected output for the network, the error function is defined as follows:

Thus, for all the alternatives, the overall error of the sample data is as follows:

The training process makes Errork as small as possible by adjusting the attribute weights wjk. The weight-correction (i.e., gradient descent) formula is as follows, where the learning rate determines the step size:

Based on the discussion above, the reasoning process is composed of the following steps:

Step 1. Initialize the proposed neural network as shown in Figure 2, and set the initial attribute weights.

Step 2. Add the input samples and expected outputs. In order to ensure that the weight adjustments are greater than 0 during the training process, a constant is added to the expected output of each sample, and the adjusted values are used as the new expected outputs.

Step 3. Calculate the actual output of the output layer according to formula (24).

Step 4. Calculate the error between the expected output value and the actual output according to formula (26); if the error meets the predefined training accuracy requirements, stop the network training process; otherwise, go to Step 5.

Step 5. Adjust the weighting coefficients between the output layer and the input layer according to (27).

Step 6. When the maximum number of training iterations is reached and the trained weights are positive, the training exits; otherwise, go to Step 2.
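The training steps above can be sketched as a plain delta-rule (least-mean-squares) loop. This is a minimal sketch: equal initial weights are assumed, and the constant offset of Step 2 and the positivity check of Step 6 are omitted for brevity.

```python
def train_attribute_weights(B, y, lr=0.001, epochs=3000):
    """Train a single-layer linear network whose output is sum_j w_j * b_ij.

    B: m x n matrix of normalized attribute values from one expert.
    y: length-m vector of normalized expected outputs (overall preferences).
    Plain delta-rule updates, as in the gradient-descent formula (27).
    """
    m, n = len(B), len(B[0])
    w = [1.0 / n] * n                        # equal initial weights (assumed)
    for _ in range(epochs):
        for i in range(m):                   # one pass over the m samples
            out = sum(w[j] * B[i][j] for j in range(n))
            err = y[i] - out                 # expected minus actual output
            for j in range(n):
                w[j] += lr * err * B[i][j]   # weight correction
    return w
```

On a small consistent sample the loop recovers the generating weights; for example, with B = [[1, 0], [0, 1], [1, 1]] and y = [0.3, 0.7, 1.0], the trained weights approach (0.3, 0.7).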

4.3.2. Set Up a Neural Network to Determine Expert Weights

In order to fully reflect the roles of the experts in the process of group decision-making, a neural network is proposed for training the expert weights, as shown in Figure 3. Similar steps to those stated earlier are taken to train and obtain the expert weights.

After training the neural network (as shown in Figure 2) by means of the sample data, that is, the attribute values of the alternatives given by the P experts, the errors or distances between the expected outputs and the actual outputs are obtained, as stated in (25). Then, the distances disi1, …, disiP, i = 1,…,m, are employed as the inputs for training the expert weights.
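The expert-weight training can be sketched with the same delta-rule loop, using the rows of the matrix dis as inputs and the constant goal 0.0001 as the expected output, following model (23). The learning rate and epoch count below are illustrative assumptions.

```python
def train_expert_weights(dis, lr=5.0, epochs=5000, target=0.0001):
    """Train raw expert weights theta so that sum_k theta_k * dis_ik is close
    to the goal value for every alternative i (a sketch of model (23)).

    dis: m x P matrix of per-expert distances; returns the raw weights.
    """
    m, P = len(dis), len(dis[0])
    theta = [1.0 / P] * P                    # equal initial weights (assumed)
    for _ in range(epochs):
        for i in range(m):
            out = sum(theta[k] * dis[i][k] for k in range(P))
            err = target - out               # goal minus actual output
            for k in range(P):
                theta[k] += lr * err * dis[i][k]
    return theta
```

After training, the weighted sums of the distance rows approach the goal value, and the raw weights are then normalized into the final expert weights.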

After the training results for the P experts are obtained, the final expert weights can be derived by means of the following normalization formula:

Therefore, the overall attribute weights can be obtained as follows:
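A sketch of the two closing formulas follows: normalizing the trained results into expert weights, as in (28), and aggregating the per-expert attribute-weight vectors into the overall attribute weights. The linear weighted-average form of the aggregation in (29) is an assumption.

```python
def normalize_expert_weights(theta):
    """lambda_k = theta_k / sum(theta): normalize the trained results (cf. (28))."""
    s = sum(theta)
    return [t / s for t in theta]

def combine_attribute_weights(lam, W_experts):
    """W_j = sum_k lambda_k * w_jk: weighted average of the experts' trained
    attribute-weight vectors (the linear form of (29) is assumed)."""
    n = len(W_experts[0])
    return [sum(lam[k] * W_experts[k][j] for k in range(len(lam)))
            for j in range(n)]
```

Both results stay on the unit simplex: the expert weights sum to 1 by construction, and a weighted average of normalized attribute-weight vectors remains normalized.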

4.4. Calculate the Overall Values of the Alternatives

Based on the normalized decision matrices, the attribute weights as in (29), and the expert weights as in (28), the overall values of the alternatives can be obtained as follows:
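The weighted-sum aggregation can be sketched as below; it assumes that the overall value of each alternative is the expert-weighted average of its attribute-weighted sums, which is one natural reading of formula (30).

```python
def overall_values(B_experts, lam, W):
    """d_i = sum_k lambda_k * sum_j W_j * b_ijk: overall values of the
    alternatives across experts and attributes (aggregation order assumed).

    B_experts: list of P normalized m x n decision matrices.
    lam: expert weights; W: overall attribute weights.
    """
    m, n = len(B_experts[0]), len(W)
    return [sum(lam[k] * sum(W[j] * B_experts[k][i][j] for j in range(n))
                for k in range(len(B_experts)))
            for i in range(m)]
```

The alternatives are then ranked in descending order of these overall values.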

The alternatives can be ranked according to their overall values as obtained in (30).

5. Illustrations and Comparisons

5.1. Illustrations

In this illustration, five projects (i.e., alternatives S1, S2, S3, S4, and S5) are evaluated and chosen for investment. Four attributes are employed for the assessment task, that is, industry status (C1), R&D investment (C2), technological innovation (C3), and market prospect (C4), and they are assumed to be beneficial. Three experts are invited to evaluate the alternatives against the attributes and give their overall evaluations on the alternatives, as shown in Tables 2–4, respectively.

For the first attribute, "industry status," the experts give their evaluation information by means of preference orderings. For the attribute "R&D investment," the experts give their evaluation information by means of intervals. For the attribute "technological innovation," the experts give their evaluation information by means of linguistic terms from {none, worse, bad, fair, good, very good, excellent}. For the attribute "market prospect," the experts give their evaluation information by means of uncertain linguistic variables, where t0 = none, t1 = worse, t2 = bad, t3 = fair, t4 = good, t5 = very good, t6 = excellent.

Based on the discussions above, the decision information from experts is normalized and the results are as follows:

Expert e1 gives his/her overall preference orderings on the alternatives as (5, 2, 3, 4, 1), expert e2 as (5, 3, 1, 4, 2), and expert e3 as (4, 1, 2, 3, 5); these orderings are normalized by the method for preference orderings stated above.

Based on Figure 2, the following neural network is designed for simulations by means of MATLAB. As shown in Figure 4, there are four input nodes, which correspond to the four attributes, and five samples, which correspond to the attribute values of the five alternatives.

The initial attribute weights are set identically for the three experts, k = 1, 2, 3. The learning rate of the network is set to 0.001 and the maximum number of iterations is set to 3000.

Based on the normalized decision information of expert e1, after 3000 iterations, the attribute weights and errors are obtained and listed in Tables 5 and 6, respectively.

Based on the normalized decision information of expert e2, after 3000 iterations, the attribute weights and errors are obtained and listed in Tables 7 and 8, respectively.

Based on the normalized decision information of expert e3, after 3000 iterations, the attribute weights and errors are obtained and listed in Tables 9 and 10, respectively.

Furthermore, in order to determine the expert weights, based on the neural network in Figure 3, the neural network for training the expert weights is implemented in MATLAB, as shown in Figure 5.

The number of input neurons is three and the number of output neurons is one. The inputs are the values in the matrix dis obtained by the above process for training the attribute weights, stated as follows:

The expected output is set to 0.0001. The learning rate of the network is set to 0.001 and the maximum number of iterations is set to 3000. After training the neural network in Figure 5, the expert weights are obtained, as stated in Table 11.

Therefore, the attribute weights can be obtained as W = (0.1839, 0.3492, 0.1927, 0.2742). The overall values of the alternatives can be obtained as d1 = 0.3818, d2 = 0.6963, d3 = 0.72, d4 = 0.4028, and d5 = 0.5489. Thus, the final ranking of the alternatives is S3 ≻ S2 ≻ S5 ≻ S4 ≻ S1.

5.2. Comparisons with Other Methods

In order to display the advantage of the neural network-based approach proposed in this article, a comparison is made with the maximizing deviation method with respect to the three experts' decision information. As stated in Tables 12–14, the resulting attribute weights are different. The reason is that the maximizing deviation method deals only with the attribute values in the decision matrix, whereas the neural network-based approach proposed in this article integrates the attribute values in the decision matrix with the experts' preference information on alternatives, by training the neural networks and memorizing the two sources of information in the weights.

Furthermore, in terms of calculating the overall values of the alternatives, for new attribute-value data, the maximizing deviation method has to be applied again to determine the attribute weights and calculate the overall values of the alternatives. In contrast, with new attribute-value data, the neural network-based approach proposed in this article obtains the overall values of the alternatives without solving the optimization models for the attribute weights again, since the neural network outputs the overall values of the alternatives directly.

6. Conclusions

By means of setting up two optimization models and designing two neural networks, an approach is proposed to deal with the multi-attribute group decision-making problems with heterogeneous preference information on attribute values and overall preference orderings on alternatives. The first neural network is trained for attribute weights by using attribute values as the inputs and overall preference orderings on alternatives as the expected outputs. The second neural network is trained for deriving the expert weights.

The merits of the proposed approach lie in three aspects: (1) two models are set up to optimize the attribute weights and the expert weights; (2) based on the optimization model for attribute weights, a neural network is designed and trained, which memorizes the experts' preference information on attribute values and on alternatives. The trained neural network can take new entries of preference information on attribute values and obtain the overall values of the alternatives automatically, satisfying the experts' preference on alternatives; (3) based on the optimization model for expert weights, a neural network is designed and trained for memorizing the expert weights, which provides guidance for the decision-making process.

To summarize, the contributions of this article are as follows: (1) the proposed approach presents a new way of solving the uncertain multi-attribute group decision-making problems with heterogeneous preference information on attribute values and overall preference orderings on alternatives. (2) The proposed neural networks enable the experts' preference information to be remembered and allow new data entries on attribute values, and the overall values of the alternatives can be obtained without solving the optimization models again. The proposed approach and the neural networks provide guidance for decision-makers in uncertain multi-attribute group decision-making problems with preference information on alternatives.

In group decision-making under uncertainty, it is important to reach consensus. Future studies will focus on the consensus reaching process in MAGDM with heterogeneous preference information [30–32], in particular on consensus reaching for personalized individual semantics-based social network group decision-making by means of neural networks. In addition, personalized individual semantics-based consistency control and consensus reaching in group decision-making will also be a hot topic [30–32].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.