Building Mathematical Models for Multicriteria and Multiobjective Applications 2019
Research Article | Open Access
Giancarllo Ribeiro Vasconcelos, Caroline Maria de Miranda Mota, "Exploring Multicriteria Elicitation Model Based on Pairwise Comparisons: Building an Interactive Preference Adjustment Algorithm", Mathematical Problems in Engineering, vol. 2019, Article ID 2125740, 14 pages, 2019. https://doi.org/10.1155/2019/2125740
Exploring Multicriteria Elicitation Model Based on Pairwise Comparisons: Building an Interactive Preference Adjustment Algorithm
Abstract
Pairwise comparisons have been applied to several real decision-making problems. As a result, this method has been recognized as an effective decision-making tool by practitioners, experts, and researchers. Although methods based on pairwise comparisons are widespread, decision-making problems with many alternatives and criteria may be challenging. This paper presents the results of an experiment used to verify the influence of a high number of preference comparisons on the inconsistency of the comparisons matrix and identifies the influence of consistencies and inconsistencies on the assessment of the decision-making process. The findings indicate that it is difficult to predict the influence of inconsistencies and that the priority vector may or may not be influenced by low levels of inconsistency, with a consistency ratio of less than 0.1. Finally, this work presents an interactive preference adjustment algorithm with the aim of reducing the number of pairwise comparisons while capturing effective information from the decision maker to approximate the results of the problem to their preferences. The presented approach ensures the consistency of a comparisons matrix and significantly reduces the time that decision makers need to devote to the pairwise comparisons process. An example application of the interactive preference adjustment algorithm is included.
1. Introduction
Preference judgment is a key issue in multicriteria decision-making (MCDM) methods; MCDM is a generic term given to a collection of systematic approaches and methods developed to support the evaluation of alternatives in a context with many objectives and conflicting criteria [1, 2].
Multicriteria methods can be classified into three groups: first, aggregation methods based on a single criterion of synthesis, whose main representatives are the multiattribute utility theory, the analytic hierarchy process (AHP), and MACBETH. The second group is outranking methods, such as the PROMETHEE and ELECTRE methods. Finally, the third group consists of the interactive methods, such as multiobjective linear programming (PLMO) [3, 4].
Different types of cognitive and behavioral biases play an important role in decision making. MCDM aims to help people make strategic decisions according to their preferences and an overarching understanding of the problem [2], yet it is subject to various cognitive and procedural deviations [5]. These deviations can occur at all stages of the decision-making process (problem structuring, evaluation of criteria and alternatives, and sensitivity analysis), and they can lead to incorrect recommendations.
In additive models, inconsistency can be perceived in two distinct situations of decision maker (DM) judgments: intercriteria and intracriteria evaluations. The intercriteria evaluation involves the elicitation procedure for determining the weights of the criteria, where inconsistencies have been reported in procedures such as ratio [6], swing [7], tradeoff [2, 8, 9], and FITradeoff [10]. In the intracriterion evaluation, a value function is determined for each criterion. At this stage, some methods take a simplistic approach by assuming a linear value function, such as SMARTS and SMARTER [7], or a more sophisticated approach to build a utility function that expresses risk attitudes [2]. Another group of methods builds the value function based on pairwise comparisons (PCs) of several preference statements (questioning), such as the analytic hierarchy process [11–13] and MACBETH [14, 15].
A common cognitive deviation in MCDM methods occurs when there is a PC of a large number of alternatives, which requires a great deal of cognitive effort by the DM [16–20]. In this case, several studies have reported concerns related to the applicability of this type of procedure in situations where the number of criteria and alternatives is quite large [17, 18, 20, 21]. In such situations, the number of PCs made by the DM grows at an alarming rate. The time that analysts spend with DMs is increasingly scarce, and convincing a top executive to spend hours, or even days, making PCs of alternatives and criteria is often not feasible.
This paper focuses on the preference judgment of a DM, based on PCs of a qualitative criterion, to build a value function. For this purpose, an analysis was conducted based on the PC process. The AHP is one of the best known multicriteria decision aid (MCDA) approaches and is thus widely used [1, 22]. AHP is an additive method proposed by Saaty [11–13] and, since its introduction, has attracted increasing attention from researchers [22–26]. The method converts subjective assessments of relative importance to a vector of priorities (value function of one criterion), based on PCs performed within each criterion. The comparative judgments are made using the fundamental scale devised by Saaty [11], and a consistency logic is applied to check the DM’s judgments [27].
In this article, we explore the influence of consistency and inconsistency in preference assessments—based on PCs—by performing an experiment with several individuals to assess their preferences based on a given situation. A high level of inconsistency has been verified within the literature (reference). Additionally, we evaluate an alternative procedure to assess the DM’s preferences. The preferences of a DM are assessed based on an interactive procedure of asking questions and adjusting preferences, thus reducing the time spent by the DM and assuming an acceptable level of possible inconsistencies.
This paper is organized as follows. The next section presents a review of the literature concerning the causes of inconsistencies in PCs, as well as several solutions. Section 3 describes the experiment conducted to verify the influence of inconsistency. Lastly, a preference adjustment algorithm in PCs is presented.
2. Literature Review
Benitez et al. [28] have found that human beings make more good decisions than bad ones throughout their lifetime and that irrationality is a common cause of bad decisions. They thus proposed a model based on the most likely choice to capture shifts in the decision-making (irrationality) process and reduce biases caused by inconsistent judgments.
A PC uses human abilities, such as knowledge and experience, to compare alternatives and criteria in a pairwise manner and assemble a comparisons matrix [29]. Inconsistency arises when some opinions in a comparisons matrix contradict others. It is therefore important to check the consistency of the opinions, which is done through a series of calculations that produce the consistency ratio (CR), an indicator of the consistency of the comparisons matrix. It is desirable that the CR of any PCs matrix be less than or equal to 0.10 [25, 30].
For n elements, the number of PCs is n(n − 1)/2; hence, grouping and a hierarchical structure should be used for larger n [21]. Bozóki et al. [20] performed a controlled experiment with university students (N = 227), which enabled them to obtain 454 PCs matrices. They conducted experiments with matrices of different sizes from different types of multicriteria decision support methods and found that the size of a matrix affects the DM’s consistency (measured by inconsistency indices): an increasing matrix size leads to greater inconsistency.
Consider a multicriteria problem where the DM must make PCs across five criteria and ten alternatives. In such a situation, the DM must allow time to perform 225 evaluations (Table 1) to compare the alternatives within each criterion. With two more criteria, this number increases to 315 evaluations; ten criteria with ten alternatives require 450 evaluations. The number of alternatives is certainly the largest source of comparisons, since all alternatives are compared under each criterion. Equation (1) calculates the number of comparisons (CN) in the traditional method [21]:

CN = m · n(n − 1)/2,    (1)

where CN is the number of PCs of the traditional method, m is the number of criteria, and n is the number of alternatives.
Alternatively, Equation (2) calculates the number of comparisons in the Reciprocal Transitive Matrix (RTM) method, which was initially proposed by Koczkodaj and Szybowski [31] and called pairwise comparisons simplified:

CN_RTM = m(n − 1),    (2)

where CN_RTM is the number of PCs of the RTM method.
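Both counting formulas can be sketched in code; this is a minimal illustration reconstructed from the worked examples in the text (the function names are ours):

```python
def cn_traditional(m, n):
    # traditional method: all n(n-1)/2 pairs of alternatives, once per criterion
    return m * n * (n - 1) // 2

def cn_rtm(m, n):
    # RTM: only the first row of each comparisons matrix is elicited
    return m * (n - 1)

print(cn_traditional(5, 10))  # 225 evaluations for 5 criteria, 10 alternatives
print(cn_rtm(7, 8))           # 49 evaluations for 7 criteria, 8 alternatives
```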
Previous studies have already proposed several solutions to this problem. Weiss and Rao [32], for example, have proposed reducing the required number of questions to each DM through the use of incomplete blocks administered to different responders. Furthermore, Harker [33] developed an incomplete PC technique that aims to reduce this effort by ordering the questions in decreasing informational value and terminating the process when the value of additional questions decreases below a certain level.
A comparisons matrix with a CR equal to zero represents a fully consistent DM; such a matrix is known as an RTM, or a consistent matrix [31, 34]. The process of building a reciprocal transitive comparisons matrix requires the DM to complete only one line of the comparisons matrix. The remaining values are determined by the mathematical assumptions of an RTM. The resulting comparisons matrix is consistent and is based on the comparisons the DM makes between the criteria (or alternatives) and a preselected reference criterion (or alternative).
The process of building a RTM dramatically reduces the number of comparisons performed by the DM, as shown in Table 1. For example, consider a problem with seven criteria and eight alternatives. Using the traditional method, the DM performs 196 comparisons, while using the process of building a RTM reduces the number of comparisons to 49.
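The construction of an RTM from a single elicited row can be sketched as follows (a minimal illustration assuming multiplicative transitivity; the function name is ours):

```python
import numpy as np

def build_rtm(first_row):
    # first_row[j] holds a_1j, the DM's comparison of the reference
    # alternative A_1 against alternative A_j (with a_11 = 1)
    a = np.asarray(first_row, dtype=float)
    # transitivity a_ij = a_1j / a_1i fills in the remaining rows
    return a[None, :] / a[:, None]

M = build_rtm([1, 3, 5, 7])
assert np.allclose(M * M.T, 1.0)               # reciprocal
assert np.isclose(M[1, 3], M[1, 2] * M[2, 3])  # transitive
```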
The effort of DMs is greatly reduced when building an RTM, which may allow a more careful evaluation of the PCs. On the other hand, an error or a very imprecise evaluation in the initial comparisons can distort the decision process [21]. Kwiesielewicz and Uden [35], for example, have shown that both inconsistent and contradictory matrices can arise from the PC process. The consistency check is performed to ensure that judgments are neither random nor illogical; however, the authors reveal that, even if a matrix passes a consistency test, it can still be contradictory. Thus, they propose an algorithm for checking contradictions.
2.1. Issues with Pairwise Comparisons
Numerous studies have examined problems in the use of PCs [20, 36–38]. Two of these problems are especially relevant to this work: the use of ratio scales and the use of the principal eigenvalue as a measure of inconsistency.
With regard to the ratio scale problem, Ishizaka et al. [38], Salo and Hämäläinen [39], Donegan et al. [40], and Lootsma [41] proposed new ratio scales to solve problems associated with this type of scale. Goepel [37] and Koczkodaj et al. [41], moreover, conducted research comparing the scales and agreed that every scale is limited by its maximum value, which restricts what the DM can express. However, using larger scales may increase uncertainty.
In some cases, an unlimited scale is required, especially when comparing measurable entities such as distance and temperature [42].
Another issue is the eigenvalue problem. Some authors agree that, although the principal eigenvalue is a good approximation for near-consistent matrices, there is substantial evidence that better approximations exist, such as geometric means [42–44]. Recent studies have found that the geometric mean is the only method that satisfies the principles of consistency and is immune to the rank reversal problem [45].
A PC matrix, referred to here as M with entries m_ij, presents the relations between n alternatives. M is reciprocal if m_ji = 1/m_ij for all i, j. M is consistent if it satisfies the transitivity property m_ik = m_ij · m_jk for all i, j, k [36, 46–48]. Note that while every consistent matrix is reciprocal, the converse is, in general, false [31].
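The two properties can be checked directly; the following is a minimal sketch (the function names and tolerance handling are ours):

```python
import numpy as np

def is_reciprocal(M):
    # reciprocal: m_ji == 1 / m_ij for all i, j
    M = np.asarray(M, dtype=float)
    return np.allclose(M * M.T, np.ones_like(M))

def is_consistent(M, tol=1e-9):
    # consistent: m_ik == m_ij * m_jk for all i, j, k
    M = np.asarray(M, dtype=float)
    n = len(M)
    return all(abs(M[i, k] - M[i, j] * M[j, k]) < tol
               for i in range(n) for j in range(n) for k in range(n))

R = [[1, 2, 3], [1/2, 1, 4], [1/3, 1/4, 1]]
print(is_reciprocal(R), is_consistent(R))  # True False: reciprocal, not consistent
```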
When an n × n matrix is not consistent, it is necessary to measure the degree of inconsistency. One popular inconsistency index, proposed by Saaty [11], is the consistency index

CI = (λ_max − n)/(n − 1),

where λ_max is the principal eigenvalue of M.

Let RI_n denote the average value of the inconsistency indices of randomly generated matrices, which depends not only on n but also on the method of generating the random entries. The consistency ratio (CR) of M, indicating its inconsistency, is defined by [11, 13]

CR = CI/RI_n.

If the matrix is consistent, then λ_max = n, so CI = 0 and CR = 0.
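The computation can be sketched as follows, using the random-index values RI_n commonly tabulated for Saaty's method (the function name is ours):

```python
import numpy as np

# Saaty's random indices RI_n for n = 1..10, as commonly tabulated
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(M):
    M = np.asarray(M, dtype=float)
    n = len(M)
    lam_max = max(np.linalg.eigvals(M).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)              # Saaty's consistency index
    return ci / RI[n]

consistent = [[1, 2, 8], [1/2, 1, 4], [1/8, 1/4, 1]]  # 2 * 4 == 8
perturbed  = [[1, 2, 9], [1/2, 1, 4], [1/9, 1/4, 1]]  # transitivity broken
assert abs(consistency_ratio(consistent)) < 1e-8      # CR ~ 0 when consistent
assert consistency_ratio(perturbed) > 0               # CR grows with deviation
```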
Saaty [11, 13] introduced eigenvalues to verify the consistency of PC matrices. Furthermore, Saaty considers a matrix to be acceptably consistent when CR ≤ 0.1, meaning that the deviation of the largest eigenvalue of the given matrix is within 10% of the corresponding average value for randomly generated matrices. Saaty’s definition of consistency applies to matrices of any order; its main disadvantage is the rather arbitrary threshold of 10% [47].
Koczkodaj [47] introduced a new way of measuring the inconsistency of a PCs matrix, based on the smallest deviation of a 3 × 3 matrix from a consistent RTM. The interpretation of the measure becomes easier when we reduce a basic reciprocal 3 × 3 matrix to a vector of three coordinates (a, b, c). Since b = a · c holds for every RTM, we can always produce three RTMs (vectors) by recomputing one coordinate from the other two. These three vectors are (b/c, b, c), (a, a · c, c), and (a, b, b/a). The consistency measure (CM) is defined as the relative distance to the nearest RTM, represented by one of these three vectors for a given metric. Considering the Euclidean (or Chebyshev) metric, we have [47]

CM(a, b, c) = min{ (1/a)|a − b/c|, (1/b)|b − a · c|, (1/c)|c − b/a| }.
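This relative-distance form of the measure for a single triad can be sketched directly (a minimal illustration; the function name is ours):

```python
def koczkodaj_cm(a, b, c):
    # relative distance from the triad (a, b, c) to the nearest consistent
    # triad, recomputing one coordinate from the other two (b = a*c when
    # the triad is consistent)
    return min(abs(a - b / c) / a,
               abs(b - a * c) / b,
               abs(c - b / a) / c)

print(koczkodaj_cm(2, 8, 4))  # 0.0: consistent, since 2 * 4 == 8
print(koczkodaj_cm(2, 4, 4))  # 0.5: inconsistent triad
```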
Koczkodaj’s consistency index [36, 42, 46, 47, 49–54] not only measures the inconsistency but also shows where it is largest, thus guiding the DM to reassess and correct it. Additionally, Bozóki and Rapcsák [46] investigated some properties of Saaty’s and Koczkodaj’s inconsistency indices for PCs matrices. The results indicate that the determination of inconsistency requires further study.
Considering Saaty’s inconsistency index, some questions remain to be answered [46]: What is the relationship between an empirical matrix of human judgments and a randomly generated one? Is an index obtained from several hundred randomly generated matrices the correct reference point to determine the level of inconsistency of the matched comparisons matrix constructed from human decisions for a real decision problem? How can one take matrix size into account more precisely?
Considering Koczkodaj’s consistency index, an important issue seems to be the elaboration of the thresholds in higher dimensions or the replacement of the index with a refined rule of classification [46].
2.2. Correcting Inconsistencies
Wadjdi et al. [16] investigated the importance of data collection using forms to ensure the consistency of comparison matrices. The authors have shown that the proposed form can guarantee data consistency in the PCs matrix. The form can also be used with other techniques, such as Fuzzy-AHP, TOPSIS, and Fuzzy-TOPSIS.
Xu [55] defined the concepts of the incomplete reciprocal relation, the additive consistent incomplete reciprocal relation, and the multiplicative consistent incomplete reciprocal relation and subsequently proposed two goal programming models, based on additive and multiplicative consistency respectively, to obtain the priority vector of an incomplete reciprocal relation.
Pankratova and Nedashkovskaya [56] employed a computer simulation to compare methods of consistency improvement without the participation of a DM. They found that, starting from an inadmissibly inconsistent comparisons matrix with CR = 0.2 or CR = 0.3, consistency improvement methods can reduce the inconsistency to an admissible level (CR ≤ 0.1) for n ≥ 5. However, the results reveal that these methods are not always effective: approaching admissible inconsistency does not guarantee closeness to the real priority vector of the decision alternatives.
A method for constructing consistent fuzzy preference relations from a set of n − 1 preference values was proposed by [57]. The authors stated that, with this method, it is possible to ensure better consistency of the fuzzy preference relations provided by the DMs. However, this approach differs from our interactive preference adjustment insofar as our analysis seeks to obtain information from the DMs, whereas the previous approach was purely mathematical.
Voisin et al. [58] have noted that several consistency indices for fuzzy PC matrices have been proposed within the literature. However, some researchers argue that most of these indices are not axiomatically grounded, which may lead to deviations in the results. Voisin et al. address this gap by proposing a new index, referred to as the knowledge-based consistency index.
Benítez et al. [59] proposed the use of a technique that provides the closest consistent matrix, given inconsistent matrices, using an orthogonal projection on a linear space. In another paper [60], the same authors proposed a framework that allows for balancing consistency and DMs’ preferences, focusing specifically on a process of streamlining the tradeoff between reliability and consistency. An algorithm was designed to be easily integrated into a decision support system. This algorithm follows a process of interactive feedback that achieves an acceptable level of consistency relative to the DMs’ preferences.
Benítez et al. [61] also proposed a method for achieving consistency after reviewing the judgment of the comparisons matrix decider using optimization and discovering that it approximated the nearest consistent matrix. This method has the advantage of depending solely on n decision variables (the number of elements being compared), being less expensive than other optimization methods, and being easily implemented in almost any computing environment.
Motivated by a situation found in a real application of AHP, Negahban [62] extends previous work on improving the consistency of positive reciprocal comparison matrices by optimizing their transformation into near-consistent matrices. An optimization approach is proposed and integrated into the Monte Carlo AHP framework, allowing it to handle situations where too few distinct matrices are generated through direct sampling of the original paired comparison distributions, a situation that prevents meaningful statistical analysis and effective decision-making with the traditional Monte Carlo AHP.
Brunelli et al. [63] found evidence of proportionality between some consistency indices used in the AHP. Having established these equivalences, the authors proposed a redundancy elimination when checking the consistency of the preferences.
Xia and Xu [64] proposed methods to derive interval weight vectors from reciprocal relations to reflect the inconsistency that exists when the DMs’ preferences are taken into account for alternatives (or criteria). The authors presented programming models to minimize the inconsistency based on multiplicative and additive consistency.
Benítez et al. [27], again, proposed a formula that provides, in a very simple manner, the consistent matrix closest to a reciprocal (inconsistent) matrix. Additionally, this formula is computationally efficient, since it only uses sums to perform the calculations. A corollary of the main result reveals that the normalized vector of the vector whose components are the geometric means of the rows of a comparisons matrix only produces the priority vector for consistent matrices.
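The normalized row-geometric-mean vector mentioned in this corollary can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def geometric_mean_priorities(M):
    # normalized vector of the geometric means of the rows
    M = np.asarray(M, dtype=float)
    g = M.prod(axis=1) ** (1.0 / len(M))
    return g / g.sum()

M = [[1, 2, 8], [1/2, 1, 4], [1/8, 1/4, 1]]  # consistent matrix
print(geometric_mean_priorities(M))  # proportions 8:4:1, i.e. ~[0.615, 0.308, 0.077]
```

For a consistent matrix this vector coincides with the eigenvector priority vector, which is the case singled out by the corollary.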
The evaluation of consistency has also been studied using imprecise data. Thus, the theory of fuzzy numbers has been applied to MCDA methods to better interpret the judgment of DMs. The fuzzy AHP has been widely applied, and the evaluation of consistency in such situations has been an object of research. Bulut et al. [65] proposed a generic fuzzy-AHP model (FAHP) with pattern control of the consistency of the decision matrix for group decisions (GFAHP). The GFAHP improved performance by using direct numerical inputs without consulting the decision maker. In practice, some criteria can easily be calculated; in such cases, consulting the DM becomes redundant.
Ramík and Korviny [66] presented an inconsistency index for PC matrices with fuzzy entries based on the Chebyshev distance. However, Brunelli [67] showed that the Chebyshev distance may fail to capture inconsistency and should therefore be replaced by a more suitable metric. Liu et al. [68] reported that the study of consistency is very important, since it helps to avoid erroneous recommendations. Accordingly, they proposed a definition of reciprocal relations based on triangular fuzzy numbers that can be used to check fuzzy-AHP comparison matrices.
Xu and Wang [69] extended the eigenvector method (EM) for prioritization and defined a multiplicative consistency for incomplete fuzzy preference relations. The authors presented an approach to judge whether such a relation is acceptable and subsequently developed a consistency ratio for incomplete fuzzy preference relations similar to the one proposed by Saaty.
Koczkodaj et al. [49, 51, 54] used incomplete comparison matrices to create a system that monitors the inconsistency of the DM at each step of the PC process, informing the DM if it exceeds a certain threshold of inconsistency. The algorithm is constructed by locating the main reason for the inconsistency. Our research differs from the system proposed, in that it reduces the space in which the DM would be inconsistent by introducing an interactive process where, in the first stage, the DM is asked to complete only a single line of the matrix. Furthermore, with the help of an algorithm, we have captured and corrected deviations from the matrix in relation to the DM’s preferences.
A recent study examined the notion of generators of the PCs matrix [31]. The proposed method decreased the number of PCs from n(n − 1)/2 to n − 1, which is similar to our approach, except that we use an algorithm that identifies and corrects deviations in the PCs matrix to better capture the DM’s preferences.
3. Checking the Influence of Inconsistency
We have performed an experiment on a decision-making situation among students and staff of a university, using a procedure based on the PCs matrix. The experiment aims to identify the influence of the number of PCs on the consistency of the comparisons matrix, in line with the research conducted by Bozóki et al. [20]. To measure consistency, we use the consistency ratio [11–13].
The experiment consisted of presenting different location options for a summer holiday to DMs. DMs completed a comparisons matrix whose number of alternatives grew as the DM concluded a comparative stage. The alternatives were the cities of Rio de Janeiro, Florianopolis, Salvador, Natal, Fortaleza, Maceió, João Pessoa, Aracaju, Vitória, Recife, Porto Alegre, and Curitiba, as presented in Figure 1.
While the experiment surveyed a sample of 180 people, only 76 completed it. Of those 76, only 30 answered attentively; the others completed the questionnaire carelessly, without applying any criteria, and were very inconsistent, even in the initial stages when the number of alternatives and the cognitive effort were minimal.
The evaluation process began with the comparison of three possible alternatives (cities). Then, 4 alternatives, 5 alternatives, and eventually up to 10 alternatives were evaluated. Individuals reported that, as the number of alternatives increased, their ability to discern the difference between them decreased. Lastly, the surveyed individuals were asked whether they agreed with the ranking obtained; 67% of the individuals reported that they had a different perspective than the presented results.
The results of the experiment indicate that inconsistency in the comparisons matrix increases as the number of alternatives increases, that is, as the comparisons increase (see Figure 2).
The findings indicate that comparisons of 3, 4, and 5 alternatives do not usually generate consistency problems, considering the limit CR < 0.1 recommended by Saaty for PCs. However, matrices with 6 or more alternatives usually generate CR > 0.1, thus exceeding the limit.
The results further confirm the research by Bozóki et al. [20]. The cognitive effort to complete a comparisons matrix with more than 5 alternatives is significant: 6 alternatives already require fifteen comparisons, and these comparisons are repeated for every criterion added to the problem. Thus, we propose a solution based on reducing the number of comparisons while verifying and correcting inconsistencies through an interactive algorithm.
The results from the previous experiment were used to explore the influence of inconsistency on the PCs Matrix. We selected 10 alternatives (cities), in which we have observed a high rate of inconsistency in all results. The matrix values were randomly changed so as not to reverse the DM’s preferences. We analyzed the inconsistency in each example and then reproduced similar situations of inconsistency for a selected DM.
For example, if the DM compares A_{1} and A_{2} as 4, and A_{2} and A_{3} as 2, then, to maintain consistency and transitivity, the comparison between A_{1} and A_{3} must be 8. Changing the comparison between A_{1} and A_{3} to, say, 7 or 9 introduces inconsistency but does not reverse the order A_{1} > A_{2} > A_{3}.
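Numerically, one can check that such a perturbation preserves the ranking; this sketch uses principal-eigenvector prioritization, noting that with a_12 = 4 and a_23 = 2, consistency requires a_13 = 4 × 2 = 8:

```python
import numpy as np

def priorities(M):
    # priority vector: normalized principal eigenvector (Saaty's method)
    vals, vecs = np.linalg.eig(np.asarray(M, dtype=float))
    v = vecs[:, vals.real.argmax()].real
    return v / v.sum()

consistent = [[1, 4, 8], [1/4, 1, 2], [1/8, 1/2, 1]]  # a_13 = 4 * 2 = 8
perturbed  = [[1, 4, 7], [1/4, 1, 2], [1/7, 1/2, 1]]  # a_13 nudged to 7
for M in (consistent, perturbed):
    w = priorities(M)
    assert w[0] > w[1] > w[2]  # the order A1 > A2 > A3 is preserved
```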
To verify the influence of inconsistencies, we created a consistent matrix, formed as an RTM, to serve as a reference point for each decision maker. This consistent comparisons matrix was built from selected responses provided by the decision makers, i.e., the minimal necessary information. By including the additional information from the PCs already provided by the decision makers, we were able to generate different levels of inconsistency.
The results of a consistent matrix were compared with five other situations of inconsistency, all within an accepted level. In all situations, the matrices of comparison maintained the first line of evaluation completed by the DMs but, in each case, allowed for a different level of inconsistency (from 0 to 0.1).
The final ranking of the situations was analyzed for a given decision maker. The results are presented in Table 2. An evaluation of the alternatives was created without inconsistencies and, keeping the first row of the matrix fixed, five other evaluations were performed with CR = 0.02, 0.04, 0.06, 0.08, and 0.10 (Table 2).

A straightforward verification of the alternatives’ positions in Table 2 shows that the variation of the CR can influence the final results. With CR equal to 0 and 0.02, there is no change in ranking, although there are slight variations in the priority vector. With CR equal to 0.04, 0.06, 0.08, and 0.10, however, changes in ranking occur. This has been observed for all DMs and indicates that inconsistencies directly influence the result; this finding is especially important for additive methods.
A similar result has been observed for all DMs who participated in the present experiment. In all cases, at least one alternative changed rank position. This result leads to the conclusion that we cannot rely on CR alone to measure the consistency of the comparisons matrix.
Inconsistency can affect the recommendations made to the DM and can even be detrimental, resulting in a wrong decision. For example, in all rankings, Rio de Janeiro was ranked first; however, when CR = 0.08, Rio de Janeiro was tied with Fortaleza. This tie could generate doubt and lead the DM to choose Fortaleza. In this example a wrong decision would most likely not have detrimental consequences, but in strategic decision making such an error could have serious consequences.
4. Pairwise Comparisons with Preference Adjustment
All research related to the consistency of a comparisons matrix stems from the impossibility of ensuring that DMs are consistent with their value judgment when making PCs of the alternatives.
In the previous section, we attempted to show how the inconsistency of DMs increases as the number of alternatives increases, and how this inconsistency influences the overall results. Additionally, inconsistency is influenced by the bias in the process of DM’s elicitation of preferences.
Thus, we suggest a method that seeks to reduce and correct inconsistencies caused by DMs in the qualitative assessment of PCs.
This method is based on two procedures. First, the DM assesses the preferences over a set of alternatives for a given criterion by comparing each alternative to a reference alternative. The comparison process occurs only between the reference alternative (for instance, the best alternative) and the remaining alternatives; thus, only one line of the comparisons matrix needs to be completed. In a second step, the remaining lines are filled in using the mathematical assumptions of the RTM. This ensures the consistency of the matrix and reduces the number of comparisons. The procedure uses the mathematical presuppositions of the RTM to verify acceptable values of inconsistency. Later, the interactive algorithm identifies and corrects the DM’s inconsistencies.
At this point, we assume that the problem structuring process has already been undertaken and that the criteria and alternatives have already been identified. The decision makers subsequently sort the alternatives from best to worst.
The proposed procedure reduces the number of preference comparisons demanded by the DM, while increasing the accuracy of their preferences. The DM fills in only the first row of the comparison matrices and a few pairs of alternatives of the other lines. This procedure results in a significant reduction of the number of PCs.
The interactive algorithm selects a few pairs of available alternatives to be evaluated, i.e., additional comparisons between the alternatives, to confirm the preference judgment of the DM with an estimated value in situations without inconsistency.
When DMs complete one row of a comparisons matrix, there are no inconsistencies caused by a lack of reciprocity; only deviations from judgment can occur. Nevertheless, the DM may not have been consistent with his or her own preferences. Thus, a preference adjustment in the comparisons matrix may be performed to avoid possible deviations. One way to accomplish this is proposed below.
4.1. Preference Adjustment Algorithm
The preference adjustment is based on questions posed to the DM. These questions review a comparison between two alternatives. These two alternatives should be chosen according to the number of alternatives.
The minimum number of required PCs was presented previously; performing at least this number of assessments is a precondition for a good preference adjustment. Indeed, complete consistency can only be ensured if all comparison indices are evaluated. However, evaluating all comparison indices may be too costly, thereby eliminating the advantage of reducing the number of PCs.
The number of revisions required depends on the number of alternatives (Table 3).
(i) For a comparisons matrix with an even number of elements n: assessments are not conducted with the comparison indices of the first row and first column, so an odd number of elements remains to be evaluated. Since pairs cannot be formed from an odd number of elements without repetition, n/2 + 1 evaluations would be needed.
(ii) For a comparisons matrix with an odd number of elements n: again excluding the comparison indices of the first row and first column, an even number of elements remains to be evaluated. In this case, n/2 evaluations would be needed.
 
Legend: Ai: ith alternative; aij: comparison index of alternative Ai in relation to alternative Aj; n: number of alternatives.
Initially, the preference adjustment may seem costly, and the number of evaluations may seem as large as in the traditional PCs matrix. However, this is not the case. Solving the problem with ten criteria and ten alternatives using the RTM would require 145 evaluations (Table 1): 90 PCs and 55 preference adjustments, whereas the traditional PCs matrix would require 450 PCs.
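The evaluation counts above can be checked directly. The 55 preference adjustments come from the article's Table 1; only the two PC counts are derived here (function names are illustrative):

```python
def traditional_pcs(n_criteria, n_alternatives):
    """Full upper triangle of each comparison matrix: n(n-1)/2 judgments."""
    per_matrix = n_alternatives * (n_alternatives - 1) // 2
    return n_criteria * per_matrix

def first_row_pcs(n_criteria, n_alternatives):
    """Only the first row of each matrix: n-1 judgments."""
    return n_criteria * (n_alternatives - 1)

assert traditional_pcs(10, 10) == 450   # traditional PCs matrix
assert first_row_pcs(10, 10) == 90      # the 90 PCs reported
# 90 PCs + 55 preference adjustments (Table 1) = 145 evaluations in total
assert first_row_pcs(10, 10) + 55 == 145
```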
Comparison indices always take values greater than zero. The comparison indices of the main diagonal need not be evaluated, as each receives a value of 1. The comparison indices of the first row of the comparisons matrix are completed directly by the DM when he or she makes the PCs, and the first column follows directly from these comparisons by reciprocity.
As the PCs made by the DM are based on an interpretation of Saaty’s fundamental scale, the preference adjustment will also rely on this scale. A standard set of questions based on the fundamental scale of Saaty is proposed in Table 4. The questions are presented only for comparison scores higher than zero.

As previously mentioned, to ensure an intuitive judgment by the DM, it is important to rank the alternatives from best to worst based on the criterion in question. Thus, the DM should only be asked to compare ordered alternatives in such a way that the best alternative is always positioned in the first row.
The assessment procedure undertaken by the DM is formalized in the overall preference adjustment procedure. A standard algorithm that structures all steps of the analysis was created (Figure 3). The algorithm consists of six operations, presented below.
(1) Identify and order the comparison indices to be evaluated: Let n be the number of alternatives. To identify the comparison indices, the following rules apply: the first comparison index to be evaluated is always the one that compares alternatives A2 and A3, that is, a23; the second is always the one that compares alternatives A4 and A5. Repeat this procedure up to the nth alternative. If n is even, the last comparison index necessarily represents the comparison between An-1 and An; An-1 itself does not change, since it has been previously evaluated (Table 3).
(2) Loop: After ordering the comparison indices to be evaluated, initiate the evaluation process in that order; when the last comparison index has been evaluated, the procedure ends.
(3) Verification of range: The assessment begins by determining which range from Table 4 will be used to evaluate the comparison index.
(4) Question: Ask the DM the corresponding question listed in Table 4. (i) If the answer is yes, go back to step (2) to start the next evaluation. (ii) If the answer is no, proceed to step (5).
(5) Reevaluation of the first-line indices: The DM reevaluates the two first-row indices involved (a1i and a1j), and the comparisons matrix is updated accordingly.
(6) Determine whether there was a change of range: Check whether the index changed in terms of the ranges in Table 4. If it did not, consider that the DM was given the opportunity to reassess the comparison index of the first line and that, despite his or her negative response in step (4), the range did not change enough to alter the initial evaluation. Return to step (2) to start the next evaluation.
If there was a change of range, go back to step (3) to conduct a new evaluation of the same index.
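A minimal sketch of the six operations, with the DM interaction abstracted into callbacks; `confirm`, `revise`, and `band` are hypothetical stand-ins for the Table 4 question, the first-row reevaluation, and the range lookup, and none of these names come from the article:

```python
def consistent_matrix(first_row):
    """Rebuild the full comparisons matrix from its first row
    (RTM assumptions: a_ij = a_1j / a_1i)."""
    n = len(first_row)
    return [[first_row[j] / first_row[i] for j in range(n)] for i in range(n)]

def adjust_preferences(first_row, pairs, confirm, revise, band):
    """Steps 1-6 of the preference adjustment algorithm (Figure 3).

    pairs   -- ordered comparison indices (i, j) to review (step 1)
    confirm -- poses the Table 4 question; True means the DM agrees (step 4)
    revise  -- returns a revised first row for the indices involved (step 5)
    band    -- maps a comparison value to its Table 4 range (steps 3 and 6)
    """
    row = list(first_row)
    matrix = consistent_matrix(row)
    for i, j in pairs:                        # step 2: loop over the indices
        while True:
            b = band(matrix[i][j])            # step 3: verify the range
            if confirm(i, j, matrix[i][j]):   # step 4: ask the question
                break                         # confirmed: next index
            row = revise(row, i, j)           # step 5: DM revises the first row
            matrix = consistent_matrix(row)   # dependent indices auto-correct
            if band(matrix[i][j]) == b:       # step 6: range unchanged
                break
    return matrix

# A stub DM that always confirms leaves the consistent matrix untouched.
m = adjust_preferences([1, 2, 4, 8], [(1, 2), (2, 3)],
                       confirm=lambda i, j, v: True,
                       revise=lambda row, i, j: row,
                       band=round)
assert m[1][2] == 2.0 and m[2][3] == 2.0
```

Rebuilding the matrix from the revised first row after step (5) is what keeps the dependent indices synchronized, as the next paragraphs explain.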
An important issue to consider is the automatic change of other comparison indices when the DM revises and changes an index of the first row. Some comparison indices already assessed by the DM may thus be subject to change, and when such changes occur those indices would need to be reevaluated. The algorithm, however, avoids this problem.
If an index in the first row is changed, the other indices that depend on it are automatically corrected using the mathematical assumptions of the RTM. Thus, the algorithm corrects the deviations in the DM's preferences while maintaining the consistency of the matrix.
As we did not perform evaluations with A1, we start with a23 or a32. If the DM changes the values of a12 and a13, the underlined comparison indices will change.
Without repeating the lines already evaluated, the DM must evaluate a45 or a54, which were not altered by the previous reevaluation. If the DM changes the values of a14 and a15, the comparison indices underlined in Table 5 will change. It is important to note that neither of the indices possibly evaluated previously, a23 or a32, will have changed.

Finally, no index containing A6 has been evaluated, and there is no remaining alternative to pair with A6. In this case, we must evaluate a56 or a65, as shown in Table 3. For this situation (even n), the DM can only change the value of a16, since a15 has previously been reevaluated and any change in a15 would lead to a reassessment of a45, because its value would change. If the DM changes the value of a16, the comparison indices underlined in Table 6 will change. It is important to note that none of the indices possibly evaluated previously (a23, a32, a45, and a54) will have changed; thus they do not require reassessment.
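The ordering rule illustrated by this walkthrough (skip A1, pair consecutive alternatives, and pair the leftover alternative with its predecessor when n is even) can be sketched as follows; the exact indices are governed by the article's Table 3, so this is an illustration consistent with the n = 6 example:

```python
def comparison_indices(n):
    """1-based index pairs (i, j) reviewed during preference adjustment.

    A1 is skipped, consecutive alternatives are paired (A2-A3, A4-A5, ...);
    when n is even, An is left without a partner and is compared against
    A(n-1), as in the n = 6 walkthrough above.
    """
    pairs = [(i, i + 1) for i in range(2, n, 2)]
    if n % 2 == 0:
        pairs.append((n - 1, n))
    return pairs

assert comparison_indices(6) == [(2, 3), (4, 5), (5, 6)]
assert comparison_indices(5) == [(2, 3), (4, 5)]
```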

5. Application of the Procedure
We applied the procedure in two situations. First, we used the problem explored in the experiment described in Section 3 (tourist cities). Then, we tested the procedure with 10 DMs to determine the inconsistency and the level of agreement with the selected alternative.
5.1. Summer Holiday Case
We applied the proposed procedure to the case of selecting the next city to spend the summer holiday. The alternatives were Rio de Janeiro, Florianopolis, Salvador, and Natal, as presented in Figure 1.
The procedure involved one DM and the support of an analyst to run the interactive algorithm. We will assume three criteria with the following weights: W1 = 0.60, W2 = 0.10, and W3 = 0.30, as previously defined.
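Aggregating the per-criterion priority vectors with these weights is a standard additive step. The sketch below uses illustrative priority vectors for four alternatives (the article's actual vectors come from Tables 7-9):

```python
weights = [0.60, 0.10, 0.30]      # W1, W2, W3 as defined above
priorities = [                    # hypothetical per-criterion priority vectors
    [0.45, 0.25, 0.20, 0.10],     # criterion 1 (illustrative values)
    [0.10, 0.40, 0.30, 0.20],     # criterion 2
    [0.30, 0.30, 0.20, 0.20],     # criterion 3
]

# Additive aggregation: each alternative's overall score is the
# weighted average of its priorities across the criteria.
overall = [sum(w * p[a] for w, p in zip(weights, priorities))
           for a in range(4)]
assert abs(sum(overall) - 1.0) < 1e-9   # scores still sum to 1
best = max(range(4), key=lambda a: overall[a])
assert best == 0                        # with these numbers, A1 ranks first
```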
Details of the procedure are presented for the Tourist Attractions criterion (Tables 7 and 8). In this case, alternative A1, Rio de Janeiro, is the reference alternative. The procedure is then repeated for the comparison matrices of the alternatives under each of the three criteria.

 
Bold cells represent the additional values required from the DM.
Evaluate a23. Question: Considering the Tourist Attractions criterion, do alternatives A2 and A3 contribute equally to the goal? The answer is yes, so the matrix was not updated.
Evaluate a45. Question: Considering the Tourist Attractions criterion, do alternatives A5 and A4 contribute equally to the goal, or do your experience and judgment slightly favor alternative A5 over alternative A4? Are you sure about either situation? The DM says that alternatives A5 and A4 contribute equally to the goal, so a45 assumes the value 1, and we update the comparisons matrix.
The index a24 changed in terms of range; repeat the question to assess a24. Ask the DM: Considering the Tourist Attractions criterion, do alternatives A2 and A4 contribute equally to the goal, or do your experience and judgment slightly favor alternative A2 over alternative A4? The answer is yes, so the matrix was not updated. At the end of the preference adjustment, the priority vectors are as presented in Table 9.

The results presented in Table 10 indicate that the preference adjustment is an important opportunity for the DM to identify deviations in the comparisons matrix and correct them. The results with and without adjustment differ in three ranking positions: Fortaleza moved from third position to fourth, Salvador from fourth to fifth, and Florianopolis from fifth to third.

5.2. Additional Experiment
For this experiment, ten DMs from the highly inconsistent group were chosen to test the PCs with preference adjustment. These DMs had presented CR values higher than 0.2, and their results had therefore not been considered in the original experiment.
We began the experiment by reassessing each DM's preferences for the first line of the comparison matrix. Then we applied the preference adjustment algorithm.
At the end of the process, we asked two questions: (1) Do you consider the effort and time spent applying the method to be reasonable? (2) Do you consider that the result represents your preferences? For both questions, a five-point scale was offered for the response: No, not at all; No, not much; More or less; In general, yes; Yes, for sure.
For question 1, seven DMs responded “Yes, for sure” and three responded “In general, yes.” This indicates that the effort declined considerably compared to the traditional method.
For question 2, five DMs responded “Yes, for sure,” three responded “In general, yes,” and two responded “More or less.” This indicates that the results can be considered satisfactory in 80% of the cases.
The application of the proposed approach reveals that the effort required by the DMs to evaluate alternatives and to assess their preferences clearly decreased while attempting to maintain an acceptable level of inconsistency.
6. Conclusion
In this study, we demonstrated that the inconsistency of PC matrices, even within acceptable limits, can influence the results of a decision process. We therefore proposed the following approach: first, the PCs comprise only the first row of the matrix, while the other lines are filled in based on the assumptions of the RTM; second, an algorithm identifies and corrects deviations in the preferences of the DMs in the matrices.
The process of building an RTM, when applied to the PCs matrix, reduces the DM's effort in making the PCs. This significant reduction stems from the construction of the RTM, which entails a totally consistent evaluation, whereas PCs traditionally allow a certain level of inconsistency: CR less than or equal to 0.10.
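For reference, the consistency ratio mentioned here is CR = CI/RI, with CI = (lambda_max − n)/(n − 1) and RI Saaty's random index. The sketch below approximates lambda_max by power iteration in pure Python; it is an illustration of the standard computation, not the article's implementation:

```python
def principal_eigenvalue(A, iters=100):
    """Approximate the principal eigenvalue of a positive matrix
    by power iteration (pure Python, no external libraries)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                 # converges to lambda_max
        v = [x / lam for x in w]     # renormalize the iterate
    return lam

RI = {3: 0.58, 4: 0.90, 5: 1.12}     # Saaty's random consistency indices

def consistency_ratio(A):
    n = len(A)
    ci = (principal_eigenvalue(A) - n) / (n - 1)   # consistency index
    return ci / RI[n]

# A matrix derived from a single first row (RTM assumptions) is fully
# consistent, so its CR is numerically zero, far below the 0.10 bound.
row = [1, 3, 5]
A = [[row[j] / row[i] for j in range(3)] for i in range(3)]
assert abs(consistency_ratio(A)) < 1e-6
```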
The most important issue concerns the measurement of deviations when inconsistencies are allowed. The simulation and the experiments in this work clearly indicate that allowing inconsistencies can lead to results that differ from those obtained with fully consistent DM evaluations, although this is not always the case. This result does not nullify the value of building the RTM, nor does it discourage the use of a traditional PCs matrix.
Ultimately, the most important aspect is to check the consistency of the evaluation results to confirm that the DM's priorities are reflected in his or her judgment. If they are not, a reassessment may be necessary. This process is in fact already included in virtually all MCDA methods; yet for a PCs matrix with many alternatives, it would make decision making even more difficult. High cognitive effort generates inconsistencies, in addition to requiring a significant amount of time. It is therefore important to use tools that reduce cognitive effort while ensuring satisfactory results.
The preference adjustment algorithm aims to complement the information provided by the DM. The adjustments bring the results of the problem closer to the preferences of the DM. The procedures presented in this paper reduce the cognitive effort of the DM, eliminate inconsistencies in the comparison process, and present a recommendation that reflects the preferences of the DM.
More research is needed, however, to verify whether the results that are overly sensitive to inconsistency are linked to the scale used in the assessment, are caused by small differences between alternatives, or are simply the result of errors caused by the DM’s high cognitive effort. With regard to the application of the proposed algorithm, an important issue is to examine the use of other scales and the consistency index associated with these scales, such as the scale proposed by Koczkodaj [46].
Data Availability
The data included in the study “Exploring Multicriteria Elicitation Model Based on Pairwise Comparisons: Building an Interactive Preference Adjustment Algorithm” are the responsibility of the authors, Giancarllo Ribeiro Vasconcelos and Caroline Maria de Miranda Mota, who collected the data in an open database. There are no restrictions on access to the data.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
M. Marttunen, J. Lienert, and V. Belton, “Structuring problems for multicriteria decision analysis in practice: a literature review of method combinations,” European Journal of Operational Research, vol. 263, no. 1, pp. 1–17, 2017.
R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Wiley, New York, NY, USA, 1976.
H. Polatidis, D. A. Haralambopoulos, G. Munda, and R. Vreeker, “Selecting an appropriate multicriteria decision analysis technique for renewable energy planning,” Energy Sources, Part B: Economics, Planning, and Policy, vol. 1, no. 2, pp. 181–193, 2006.
J. Figueira, S. Greco, and M. Ehrgott, Eds., Multiple Criteria Decision Analysis: State of the Art Surveys, Springer, 2005.
G. Montibeller and D. von Winterfeldt, “Cognitive and motivational biases in decision and risk analysis,” Risk Analysis, vol. 35, no. 7, pp. 1230–1251, 2015.
W. Edwards, “How to use multiattribute utility measurement for social decision making,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 7, no. 5, pp. 326–340, 1977.
W. Edwards and F. H. Barron, “SMARTS and SMARTER: improved simple methods for multiattribute utility measurement,” Organizational Behavior and Human Decision Processes, vol. 60, no. 3, pp. 306–325, 1994.
R. L. Keeney, Value-Focused Thinking: A Path to Creative Decisionmaking, Harvard University Press, Cambridge, MA, USA, 1992.
R. L. Keeney, “Common mistakes in making value tradeoffs,” Operations Research, vol. 50, no. 6, pp. 935–1077, 2002.
A. T. de Almeida, J. A. de Almeida, A. P. C. S. Costa, and A. T. de Almeida-Filho, “A new method for elicitation of criteria weights in additive models: flexible and interactive tradeoff,” European Journal of Operational Research, vol. 250, no. 1, pp. 179–191, 2016.
T. L. Saaty, “A scaling method for priorities in hierarchical structures,” Journal of Mathematical Psychology, vol. 15, no. 3, pp. 234–281, 1977.
T. L. Saaty, “The analytic hierarchy process in conflict management,” International Journal of Conflict Management, vol. 1, no. 1, pp. 47–68, 1990.
T. L. Saaty, “How to make a decision: the analytic hierarchy process,” European Journal of Operational Research, vol. 1, pp. 19–43, 1990.
C. A. Bana e Costa and J.-C. Vansnick, “A critical analysis of the eigenvalue method used to derive priorities in AHP,” European Journal of Operational Research, vol. 187, no. 3, pp. 1422–1428, 2008.
C. A. Bana e Costa, J.-M. De Corte, and J.-C. Vansnick, “MACBETH,” International Journal of Information Technology & Decision Making, vol. 11, no. 2, pp. 359–387, 2012.
A. F. Wadjdi, E. M. Sianturi, and N. Ruslinawaty, “Design of data collection form to ensure consistency in AHP,” in Proceedings of the 2018 10th International Conference on Information Technology and Electrical Engineering (ICITEE), pp. 529–533, Kuta, Indonesia, July 2018.
L. Wu, X. Cui, and R. Dai, “Judgment number reduction: an issue in the analytic hierarchy process,” International Journal of Information Technology & Decision Making, vol. 9, no. 1, pp. 175–189, 2010.
R. R. Tan and M. A. B. Promentilla, “A methodology for augmenting sparse pairwise comparison matrices in AHP: applications to energy systems,” Clean Technologies and Environmental Policy, vol. 15, no. 4, pp. 713–719, 2013.
H. Raharjo and D. Endah, “Evaluating relationship of consistency ratio and number of alternatives on rank reversal in the AHP,” Quality Engineering, vol. 18, no. 1, pp. 39–46, 2006.
S. Bozóki, L. Dezső, A. Poesz, and J. Temesi, “Analysis of pairwise comparison matrices: an empirical research,” Annals of Operations Research, vol. 211, no. 1, pp. 511–528, 2013.
D. Ergu and G. Kou, “Questionnaire design improvement and missing item scores estimation for rapid and efficient decision making,” Annals of Operations Research, vol. 197, pp. 5–23, 2012.
L. Huo, J. Lan, and Z. Wang, “New parametric prioritization methods for an analytical hierarchy process based on a pairwise comparison matrix,” Mathematical and Computer Modelling, vol. 54, no. 11–12, pp. 2736–2749, 2011.
H. Hou and H. Wu, “What influence domestic and overseas developers’ decisions?” Journal of Property Investment & Finance, vol. 37, no. 2, pp. 153–171, 2019.
W. Gaul and D. Gastes, “A note on consistency improvements of AHP paired comparison data,” Advances in Data Analysis and Classification, vol. 6, no. 4, pp. 289–302, 2012.
C. Lin, G. Kou, and D. Ergu, “An improved statistical approach for consistency test in AHP,” Annals of Operations Research, vol. 211, no. 1, pp. 289–299, 2013.
S.-W. Lin and M.-T. Lu, “Characterizing disagreement and inconsistency in experts' judgments in the analytic hierarchy process,” Management Decision, vol. 50, no. 7, pp. 1252–1265, 2012.
J. Benítez, J. Izquierdo, R. Pérez-García, and E. Ramos-Martínez, “A simple formula to find the closest consistent matrix to a reciprocal matrix,” Applied Mathematical Modelling, vol. 38, no. 15–16, pp. 3968–3974, 2014.
X. Su, “Bounded rationality in newsvendor models,” Manufacturing & Service Operations Management, vol. 10, no. 4, pp. 566–589, 2008.
T. Toma and M. R. Asharif, AHP Coefficients Optimization Technique Based on GA, Beppu, Japan, 2003.
M. Brunelli, L. Canal, and M. Fedrizzi, “Inconsistency indices for pairwise comparison matrices: a numerical study,” Annals of Operations Research, vol. 211, pp. 493–509, 2013.
W. W. Koczkodaj and J. Szybowski, “Pairwise comparisons simplified,” Applied Mathematics and Computation, vol. 253, pp. 387–394, 2015.
E. N. Weiss and V. R. Rao, “AHP design issues for large-scale systems,” Decision Sciences, vol. 18, no. 1, pp. 43–61, 1987.
P. T. Harker, “Incomplete pairwise comparisons in the analytic hierarchy process,” Applied Mathematical Modelling, vol. 9, no. 11, pp. 837–848, 1987.
B. Golany and M. Kress, “A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices,” European Journal of Operational Research, vol. 69, no. 2, pp. 210–220, 1993.
M. Kwiesielewicz and E. van Uden, “Inconsistent and contradictory judgements in pairwise comparison method in the AHP,” Computers & Operations Research, vol. 31, no. 5, pp. 713–719, 2004.
S. Bozóki, J. Fülöp, and W. W. Koczkodaj, “An LP-based inconsistency monitoring of pairwise comparison matrices,” Mathematical and Computer Modelling, vol. 54, no. 1–2, pp. 789–793, 2011.
J. Reichert, “On the complexity of counter reachability games,” Fundamenta Informaticae, vol. 143, no. 3–4, pp. 415–436, 2016.
K. D. Goepel, “Comparison of judgment scales of the analytical hierarchy process — a new approach,” International Journal of Information Technology & Decision Making, vol. 18, no. 2, pp. 445–463, 2019.
A. A. Salo and R. P. Hämäläinen, “On the measurement of preferences in the analytic hierarchy process,” Journal of Multi-Criteria Decision Analysis, vol. 6, no. 6, pp. 309–319, 1997.
H. A. Donegan, F. J. Dodd, and T. B. McMaster, “A new approach to AHP decision-making,” The American Statistician, vol. 41, no. 3, p. 295, 2006.
F. A. Lootsma, “Scale sensitivity in the multiplicative AHP and SMART,” Journal of Multi-Criteria Decision Analysis, vol. 2, no. 2, pp. 87–110, 1993.
W. W. Koczkodaj, L. Mikhailov, G. Redlarski et al., “Important facts and observations about pairwise comparisons (the special issue edition),” Fundamenta Informaticae, vol. 144, no. 3–4, pp. 291–307, 2016.
T. L. Saaty, “On the measurement of intangibles. A principal eigenvector approach to relative measurement derived from paired comparisons,” Notices of the American Mathematical Society, vol. 60, no. 2, pp. 192–208, 2013.
R. J. Hill, “A note on inconsistency in paired comparison judgments,” American Sociological Review, vol. 18, no. 5, p. 564, 1953.
J. Barzilai, “Deriving weights from pairwise comparison matrices,” Journal of the Operational Research Society, vol. 48, no. 12, pp. 1226–1232, 1997.
S. Bozóki and T. Rapcsák, “On Saaty's and Koczkodaj's inconsistencies of pairwise comparison matrices,” Journal of Global Optimization, vol. 42, no. 2, pp. 157–175, 2008.
W. W. Koczkodaj, “A new definition of consistency of pairwise comparisons,” Mathematical and Computer Modelling, vol. 18, no. 7, pp. 79–84, 1993.
M. W. Herman and W. W. Koczkodaj, “APL2 implementation of a new definition of consistency of pairwise comparisons,” ACM SIGAPL APL Quote Quad, vol. 24, no. 4, pp. 37–40, 2007.
W. W. Koczkodaj, M. W. Herman, and M. Orłowski, “Using consistency-driven pairwise comparisons in knowledge-based systems,” in Proceedings of the Sixth International Conference on Information and Knowledge Management, pp. 91–96, Las Vegas, NV, USA, November 1997.
E. Dopazo and J. González-Pachón, “Consistency-driven approximation of a pairwise comparison matrix,” Kybernetika, vol. 39, no. 5, pp. 561–568, 2003.
W. W. Koczkodaj, K. Kułakowski, and A. Ligeza, “On the quality evaluation of scientific entities in Poland supported by consistency-driven pairwise comparisons method,” Scientometrics, vol. 99, no. 3, pp. 911–926, 2014.
W. W. Koczkodaj and J. Szybowski, “The limit of inconsistency reduction in pairwise comparisons,” International Journal of Applied Mathematics and Computer Science, vol. 26, no. 3, pp. 721–729, 2016.
R. Janicki and W. W. Koczkodaj, “A weak order solution to a group ranking and consistency-driven pairwise comparisons,” Applied Mathematics and Computation, vol. 94, no. 2–3, pp. 227–241, 1998.
W. W. Koczkodaj, P. Dymora, M. Mazurek, and D. Strzałka, “Consistency-driven pairwise comparisons approach to software product management and quality measurement,” in Contemporary Complex Systems and Their Dependability, vol. 761 of Advances in Intelligent Systems and Computing, pp. 292–305, Springer International Publishing, Cham, Switzerland, 2019.
Z. S. Xu, “Goal programming models for obtaining the priority vector of incomplete fuzzy preference relation,” International Journal of Approximate Reasoning, vol. 36, no. 3, pp. 261–270, 2004.
N. Pankratova and N. Nedashkovskaya, “Methods of evaluation and improvement of consistency of expert pairwise comparison judgments,” International Journal “Information Theories and Applications”, vol. 22, no. 3, pp. 203–223, 2015.
E. Herrera-Viedma, F. Herrera, F. Chiclana, and M. Luque, “Some issues on consistency of fuzzy preference relations,” European Journal of Operational Research, vol. 154, no. 1, pp. 98–109, 2004.
A. Voisin, J. Robert, E. H. Viedma, Y. Le Traon, W. Derigent, and S. Kubler, “Measuring inconsistency and deriving priorities from fuzzy pairwise comparison matrices using the knowledge-based consistency index,” Knowledge-Based Systems, vol. 162, pp. 147–160, 2018.
J. Benítez, X. Delgado-Galván, J. Izquierdo, and R. Pérez-García, “Achieving matrix consistency in AHP through linearization,” Applied Mathematical Modelling, vol. 35, no. 9, pp. 4449–4457, 2011.
J. Benítez, X. Delgado-Galván, J. A. Gutiérrez, and J. Izquierdo, “Balancing consistency and expert judgment in AHP,” Mathematical and Computer Modelling, vol. 54, no. 7–8, pp. 1785–1790, 2011.
J. Benítez, X. Delgado-Galván, J. Izquierdo, and R. Pérez-García, “Improving consistency in AHP decision-making processes,” Applied Mathematics and Computation, vol. 219, no. 5, pp. 2432–2441, 2012.
A. Negahban, “Optimizing consistency improvement of positive reciprocal matrices with implications for Monte Carlo analytic hierarchy process,” Computers & Industrial Engineering, vol. 124, pp. 113–124, 2018.
M. Brunelli, A. Critch, and M. Fedrizzi, “A note on the proportionality between some consistency indices in the AHP,” Applied Mathematics and Computation, vol. 219, no. 14, pp. 7901–7906, 2013.
M. Xia and Z. Xu, “Interval weight generation approaches for reciprocal relations,” Applied Mathematical Modelling, vol. 38, no. 3, pp. 828–838, 2014.
E. Bulut, O. Duru, T. Keçeci, and S. Yoshida, “Use of consistency index, expert prioritization and direct numerical inputs for generic fuzzy-AHP modeling: a process model for shipping asset management,” Expert Systems with Applications, vol. 39, no. 2, pp. 1911–1923, 2012.
J. Ramík and P. Korviny, “Inconsistency of pairwise comparison matrix with fuzzy elements based on geometric mean,” Fuzzy Sets and Systems, vol. 161, no. 11, pp. 1604–1613, 2010.
M. Brunelli, “A note on the article ‘Inconsistency of pairwise comparison matrix with fuzzy elements based on geometric mean’ [Fuzzy Sets and Systems 161 (2010) 1604–1613],” Fuzzy Sets and Systems, vol. 176, no. 1, pp. 76–78, 2011.
F. Liu, W. G. Zhang, and L. H. Zhang, “Consistency analysis of triangular fuzzy reciprocal preference relations,” European Journal of Operational Research, vol. 235, no. 3, pp. 718–726, 2014.
Y. Xu and H. Wang, “Eigenvector method, consistency test and inconsistency repairing for an incomplete fuzzy preference relation,” Applied Mathematical Modelling, vol. 37, no. 7, pp. 5171–5183, 2013.
Copyright
Copyright © 2019 Giancarllo Ribeiro Vasconcelos and Caroline Maria de Miranda Mota. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.