Abstract

Thirty-six years ago, Thomas Saaty introduced a new mathematical methodology for decision-making processes, called the Analytic Hierarchy Process (AHP). The methodology has been widely applied by Saaty and by other authors in various areas of human activity, such as planning, business, education, and healthcare, but above all in the area of management. In this paper, we provide two new proofs of the well-known statement that the maximal eigenvalue $\lambda_{\max}$ is equal to $n$ for the eigenvector problem $A\mathbf{w}=\lambda\mathbf{w}$, where $A$ is the so-called consistent matrix of pairwise comparisons of type $n\times n$ ($n\geq 2$) and the solution vector $\mathbf{w}$ represents the probabilities of disjoint events. Moreover, we suggest an algorithm for determining the solution of the eigenvalue problem, together with the corresponding flowchart. The algorithm can be easily programmed and used for an arbitrary consistent matrix.

1. Introduction

Literature regarding the Analytic Hierarchy Process is rather extensive. The well-known database Current Contents Connect provides a growing number of records of all document types (articles, books, reports, reviews, etc.) containing the acronym AHP in the publication title. The numbers of records for the last 15 years can be seen in Figure 1. These publications are mainly focused on different AHP applications, for example, a brown coal deposit [1], a comparative analysis of group aggregation techniques [2], a lab fire prevention management system [3], green vendor evaluation and selection in production outsourcing in the mining industry [4], an approximation of risk assessment [5], an evaluation of healthcare equipment [6], and many others.

Recently, the mathematical principles of the AHP were published by Saaty [7]. This monograph collects many mathematical findings in the area of decision-making processes, particularly in the area of the AHP, starting from the first publication [8] in 1980.

In this paper, we add two proofs of the main statement of the AHP theory, as well as a corresponding algorithm for practical use. Neither the proofs nor the algorithm is included in [7].

Let us consider a consistent matrix $A$ in the form
$$A=\begin{pmatrix} 1 & c_2 & c_3 & \cdots & c_n\\ 1/c_2 & 1 & c_3/c_2 & \cdots & c_n/c_2\\ 1/c_3 & c_2/c_3 & 1 & \cdots & c_n/c_3\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1/c_n & c_2/c_n & c_3/c_n & \cdots & 1 \end{pmatrix},\qquad(1)$$
where $c_2, c_3, \ldots, c_n$ are given positive constants, known in the AHP theory as priorities. Let the column vector $\mathbf{w}=(w_1,w_2,\ldots,w_n)^T$ reflect the probabilities of disjoint events $E_1,E_2,\ldots,E_n$, so that the space of elementary events is given as the union of such events, $\Omega=E_1\cup E_2\cup\cdots\cup E_n$, which are disjoint ($E_i\cap E_j=\emptyset$, $i\neq j$). The elementary events $E_i$ represent different criteria or alternatives in the AHP scheme, and the probabilities $w_i$ are known as the weights. As usual, it must be required that
$$w_1+w_2+\cdots+w_n=1,\qquad w_i\geq 0,\qquad(2)$$
for the probability components $w_i$. The product $A\mathbf{w}$ is a column vector whose components are sums of many addends if $n$ is a sufficiently large natural number. Therefore, it is better to find a real number $\lambda$ such that the simpler product $\lambda\mathbf{w}$ is equal to the complex product $A\mathbf{w}$. Hence, we have the equation $A\mathbf{w}=\lambda\mathbf{w}$. Certainly, such an equation has the trivial solution $\mathbf{w}=\mathbf{0}$ for an arbitrary real number $\lambda$, but this is not interesting for us, because the zero solution does not satisfy condition (2). If one looks for nontrivial solutions $\mathbf{w}\neq\mathbf{0}$, then the system $A\mathbf{w}=\lambda\mathbf{w}$, rewritten into the form $(A-\lambda I)\mathbf{w}=\mathbf{0}$, must have a singular matrix $A-\lambda I$, i.e., determinant $\det(A-\lambda I)=0$. Here, $I$ marks the unit matrix.
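
The following short sketch (in Python with NumPy; any of the languages mentioned later would do) only makes this setting concrete. It assumes the reading of form (1) used above, namely $a_{ij}=c_j/c_i$ with $c_1=1$, and hypothetical priorities; it checks numerically that the vector with components proportional to $1/c_i$, normalized according to condition (2), satisfies $A\mathbf{w}=n\mathbf{w}$.

```python
import numpy as np

# Hypothetical priorities c_2, ..., c_n; by convention c_1 = 1 (our reading of form (1)).
c = np.array([1.0, 2.0, 4.0, 8.0])
n = len(c)

# Consistent matrix a_ij = c_j / c_i: every entry is determined by the first row.
A = c[np.newaxis, :] / c[:, np.newaxis]

# Candidate eigenvector: components proportional to 1 / c_i, normalized by condition (2).
w = (1.0 / c) / np.sum(1.0 / c)

print(np.allclose(A @ w, n * w))   # True: A w = n w, so the eigenvalue n is attained
print(round(float(w.sum()), 10))   # 1.0, i.e., condition (2) holds
```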

2. Derivation

Now, we will demonstrate how to derive the determinant $\det(A-\lambda I)$ for an arbitrary consistent matrix $A$ of type $n\times n$.

By the calculation of determinants [9–11], or by means of the Laplace expansion, the validity of the following formula can be verified for the consistent matrix (1) and an arbitrary real $\lambda$:
$$\det(A-\lambda I)=(-\lambda)^{\,n-1}(n-\lambda).\qquad(3)$$

The cases $n=2$ and $n=3$ can be treated separately by fundamental rules, such as the Sarrus rule. Next, we present two different ways to prove statement (3), which are involved neither in [7, 12] nor in other works known to the authors of this paper. The first proof is done by the direct calculation of the determinant on the left-hand side of statement (3). The second proof is based on the mathematical induction method.

3. Proof by Direct Calculation of Determinant

At the beginning, we arrange the determinant in (3). We factor out $1/c_i$ from the second, third, fourth, up to the $n$-th row. Thus, we get
$$\det(A-\lambda I)=\frac{1}{c_2c_3\cdots c_n}\,\det\begin{pmatrix}1-\lambda & c_2 & c_3 & \cdots & c_n\\ 1 & c_2(1-\lambda) & c_3 & \cdots & c_n\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & c_2 & c_3 & \cdots & c_n(1-\lambda)\end{pmatrix}.$$
Then, the factors $c_2,c_3,\ldots,c_n$ are taken out from the second, third, fourth, up to the $n$-th column of the last determinant. The factors in front of the determinant cancel, and we obtain
$$\det(A-\lambda I)=\det\begin{pmatrix}1-\lambda & 1 & \cdots & 1\\ 1 & 1-\lambda & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \cdots & 1-\lambda\end{pmatrix}.$$
In this way, we have arranged the determinant in (3) into a simple determinant of type $n\times n$, in which all diagonal elements are equal to $1-\lambda$ and all remaining elements are equal to $1$.

Next, we evaluate this simple determinant. By elimination in the first column and by taking the common factor out of the corresponding rows, and by the similar elimination in the second, third, fourth, up to the $n$-th column, the determinant is successively transformed into a triangular one, which is equal to the product of its diagonal elements. The final result can be expressed as
$$\det(A-\lambda I)=(-\lambda)^{\,n-1}(n-\lambda),$$
which is the same result as the right-hand side of statement (3).
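
As a quick numerical sanity check of this computation (not part of the original proof), the following sketch in Python with NumPy builds the reduced determinant of this section directly, with all entries equal to $1$ except the diagonal entries $1-\lambda$, and compares it with the right-hand side of statement (3) for a few values of $\lambda$.

```python
import numpy as np

# Reduced determinant from Section 3: all entries 1, diagonal entries 1 - lambda.
n = 6
for lam in (-1.0, 0.5, 2.5, 6.0):
    D = np.ones((n, n)) - lam * np.eye(n)
    lhs = np.linalg.det(D)
    rhs = (-lam) ** (n - 1) * (n - lam)   # right-hand side of statement (3)
    print(np.isclose(lhs, rhs))           # True for every tested lambda
```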

4. Proof by Mathematical Induction Method

Equation (3) is valid for $n=2$. Next, we suppose that statement (3) holds for an arbitrary natural $n$, and we prove that the statement is valid also for the next natural number, i.e., $n+1$. So, we will prove that
$$\det\bigl(A_{n+1}-\lambda I\bigr)=(-\lambda)^{\,n}\bigl((n+1)-\lambda\bigr),\qquad(10)$$
where $A_{n+1}$ is the consistent matrix of form (1) of type $(n+1)\times(n+1)$. In a similar way to that done at the beginning of Section 3, one can take the factors out of the second, third, fourth, up to the $(n+1)$-th row and out of the second, third, fourth, up to the $(n+1)$-th column of the determinant in (10). Thus, we get a determinant of $n+1$ rows and columns whose diagonal elements are equal to $1-\lambda$ and whose remaining elements are equal to $1$. Next, we apply the standard Laplace expansion, see, e.g., [10, 11], according to the last, $(n+1)$-th, row. Now, we arrange all the resulting determinants of type $n\times n$ so that the row of units is the first row. This can be achieved by switching adjacent rows, an operation which leads to a sign change of the determinant. The common determinant with units in the first row can then be taken out of the bracket. We arrange the sum near the first determinant, and the second determinant is substituted according to the induction assumption (3). We then arrange the last determinant by subtracting the first row of units from the remaining rows, and the final operation is the expansion according to the last column. Collecting all terms, we get
$$\det\bigl(A_{n+1}-\lambda I\bigr)=(-\lambda)^{\,n}\bigl((n+1)-\lambda\bigr).$$
It is proven that the determinant on the left-hand side of (10) is equal to the expression on its right-hand side. Thus, statement (3) holds for an arbitrary natural $n\geq 2$, as it results from the mathematical induction method.
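
As a small numerical illustration of the induction step (again only a sketch, assuming the form $a_{ij}=c_j/c_i$ with hypothetical priorities), the following Python snippet checks identity (3) for an order-$n$ matrix and for the order-$(n+1)$ matrix obtained by appending one more priority.

```python
import numpy as np

def consistent(c):
    """Consistent matrix with first row (1, c_2, ..., c_n), i.e. a_ij = c_j / c_i."""
    c = np.asarray(c, dtype=float)
    return c[np.newaxis, :] / c[:, np.newaxis]

c_n  = [1.0, 2.0, 3.0, 7.0]     # hypothetical priorities for order n = 4
c_n1 = c_n + [11.0]             # one additional priority -> order n + 1 = 5

lam = 1.3                        # arbitrary test value of lambda
for c in (c_n, c_n1):
    A = consistent(c)
    m = len(c)
    lhs = np.linalg.det(A - lam * np.eye(m))
    rhs = (-lam) ** (m - 1) * (m - lam)
    print(np.isclose(lhs, rhs))  # True for both orders n and n + 1
```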

5. Main Statement

Both the proof by the direct calculation of the determinant in Section 3 and the proof by the mathematical induction method in Section 4 lead to the same result; namely, the determinant $\det(A-\lambda I)$ is equal to $(-\lambda)^{\,n-1}(n-\lambda)$ for an arbitrary consistent matrix $A$ of form (1) with arbitrary natural $n\geq 2$.

If one looks for the eigenvalues of the consistent matrix $A$ with arbitrary natural $n\geq 2$, then the equation $\det(A-\lambda I)=0$ must hold, i.e., $(-\lambda)^{\,n-1}(n-\lambda)=0$. From here, we have the following main theorem.

Theorem 1. An arbitrary consistent matrix $A$ of form (1) with arbitrary natural $n\geq 2$ has the eigenvalues $\lambda_1=\lambda_2=\cdots=\lambda_{n-1}=0$, but $\lambda_n=n$.
Hence, the maximal eigenvalue is $\lambda_{\max}=n$. The reason why we are not interested in the other eigenvalues is simple: the components of the eigenvectors for the other eigenvalues do not fulfil the conditions $w_1+w_2+\cdots+w_n=1$, $w_i\geq 0$. We demonstrate this by the next simple example.

Example 1. Let us consider the consistent matrix
$$A=\begin{pmatrix}1 & c_2 & c_3\\ 1/c_2 & 1 & c_3/c_2\\ 1/c_3 & c_2/c_3 & 1\end{pmatrix}$$
of type 3 × 3 with positive priorities $c_2$ and $c_3$. According to Theorem 1, the matrix has the eigenvalues $\lambda_1=\lambda_2=0$ and $\lambda_3=3$. Let us determine the eigenvectors with nonnegative components for the eigenvalues $\lambda=0$ and $\lambda=3$.

Let $\lambda=0$. Then the matrix $A-\lambda I$ equals $A$, and the system $A\mathbf{w}=\mathbf{0}$ can be solved by the elimination method: since the second and third rows of $A$ are multiples of the first one, the elimination leaves the single equation $w_1+c_2w_2+c_3w_3=0$ with three unknowns $w_1$, $w_2$, and $w_3$. Moreover, the solution must fulfil condition (2), that means $w_1+w_2+w_3=1$. So, we solve two equations with three unknowns $w_1$, $w_2$, and $w_3$. One unknown, let us say $w_3$, is a free one, and after substitution one can express the remaining components in terms of $w_3$. However, the equation $w_1+c_2w_2+c_3w_3=0$ with positive coefficients admits no solution with all components nonnegative other than the zero vector, which violates condition (2). Hence, no feasible eigenvector exists for $\lambda=0$.

On the other hand, let $\lambda=3$. Then the system $(A-3I)\mathbf{w}=\mathbf{0}$ by the same elimination provides $w_2=w_1/c_2$ and $w_3=w_1/c_3$, and including condition (2), one can get the unique solution
$$\mathbf{w}=\frac{1}{1+1/c_2+1/c_3}\,\bigl(1,\;1/c_2,\;1/c_3\bigr)^{T},$$
where all components are nonnegative. Thus, the result is evident.
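
The concrete entries of the paper's 3 × 3 example are not reproduced above, so the following sketch uses hypothetical priorities instead; it only illustrates the conclusion numerically: the eigenvalue 3 admits an eigenvector that can be normalized to nonnegative components summing to 1, whereas the eigenvectors for the double eigenvalue 0 necessarily mix signs.

```python
import numpy as np

c = np.array([1.0, 2.0, 5.0])              # hypothetical priorities for a 3 x 3 consistent matrix
A = c[np.newaxis, :] / c[:, np.newaxis]    # assumed form a_ij = c_j / c_i

vals, vecs = np.linalg.eig(A)
print(np.round(vals.real, 6))              # approximately [3, 0, 0]

# Eigenvector for the maximal eigenvalue, scaled so its components sum to 1 (condition (2)):
k = int(np.argmax(vals.real))
w = vecs[:, k].real
w = w / w.sum()
print(np.round(w, 4))                      # all components are nonnegative

# Any eigenvector for lambda = 0 solves w_1 + c_2 w_2 + c_3 w_3 = 0 with positive
# coefficients, so it must contain components of both signs and cannot satisfy (2).
```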

6. Algorithm for Determination of Components

In this section, we will demonstrate an algorithm which determines all components $w_i$, $i=1,2,\ldots,n$, of the solution $\mathbf{w}$ if a consistent matrix $A$ of type $n\times n$ is given for an arbitrary natural $n$ ($n\geq 2$); see (1). As we have seen in Section 5, the system $(A-\lambda I)\mathbf{w}=\mathbf{0}$ must be solved with $\lambda=n$; otherwise, some components can be negative and therefore do not represent probabilities of any events. In this case, the system $(A-nI)\mathbf{w}=\mathbf{0}$ has the explicit form
$$(1-n)w_1+c_2w_2+\cdots+c_nw_n=0,\qquad \frac{1}{c_i}\,w_1+\frac{c_2}{c_i}\,w_2+\cdots+(1-n)w_i+\cdots+\frac{c_n}{c_i}\,w_n=0,\quad i=2,3,\ldots,n,$$
which is arranged in the following way. First, we multiply the second, third, up to the $n$-th row by $c_2,c_3,\ldots,c_n$, respectively. The following is obtained:
$$(1-n)w_1+c_2w_2+\cdots+c_nw_n=0,\qquad w_1+c_2w_2+\cdots+(1-n)c_iw_i+\cdots+c_nw_n=0,\quad i=2,3,\ldots,n.$$
Second, we rewrite the last system in terms of the new unknowns $x_1=w_1$, $x_2=c_2w_2$, ..., $x_n=c_nw_n$ to get a simpler matrix:
$$(1-n)x_1+x_2+\cdots+x_n=0,\quad x_1+(1-n)x_2+\cdots+x_n=0,\quad \ldots,\quad x_1+x_2+\cdots+(1-n)x_n=0.$$
Then, we use the elimination method. We can see that the last row of the eliminated system is zero, which is evident because the determinant with the eigenvalue $\lambda=n$ is equal to zero. Thus, there exist many eigenvectors that satisfy (24). In this set, we choose only one eigenvector satisfying condition (2). This can be achieved by substituting the zero row in the last system arrangement with the following row. Condition (2) can be written in the matrix form $(1,1,\ldots,1)\,\mathbf{w}=1$; however, our consideration is related to the unknown vector
$$\mathbf{x}=(w_1,\;c_2w_2,\;\ldots,\;c_nw_n)^{T}\qquad(29)$$
in the last system arrangement. So, condition (2) must be rewritten as
$$x_1+\frac{1}{c_2}\,x_2+\cdots+\frac{1}{c_n}\,x_n=1,$$
and due to this, we will have the system (31), consisting of the eliminated rows supplemented by this last row, for the unknown vector (29).

The core of the suggested algorithm consists in the elimination of the last row of the system (31). This elimination can be processed by the following steps.

Step 1. The first row in (31) is multiplied by a suitable fraction, which we denote by $\bar{c}_1$, and the multiplied row is added to the last row. Due to this operation, the last modified row has its first element equal to zero, and its second element changes accordingly.

Step 2. The second row in (31) is multiplied by a suitable fraction, which we denote by $\bar{c}_2$, and the multiplied row is added to the last modified row. Due to this operation, the new modified row has its first two elements equal to zero, and its third element changes accordingly.

Etc.

Step n − 1. The $(n-1)$-st row in (31) is multiplied by a suitable fraction, which we denote by $\bar{c}_{n-1}$, and the multiplied row is added to the last modified row. Due to this operation, the new modified row has its first $n-1$ elements equal to zero, and only its last element remains nonzero. The key point of the elimination process is that system (31) is thus brought into an upper triangular form, from which the last unknown is obtained by relation (33) and the remaining unknowns follow by back substitution. Thus, we come to Algorithm 1.

Input: $(c_2, c_3, \ldots, c_n)$ → given priorities
Output: $(w_1, w_2, \ldots, w_n)$ → unknown eigenvector components
  $i := 1$ → initial assignment
  while $i \le n-1$ do → if $n = 2$, only a single elimination step is needed
    for each $k = i, i+1, \ldots, n$ do → cycle for index k: elimination of the last row of (31)
      add $\bar{c}_i$ times the $k$-th element of the $i$-th row to the $k$-th element of the last row
    end
    $i := i + 1$
  end
  determine the last component of the unknown vector (29) from the modified last row → relation (33)
  $w_n :=$ (last component of (29)) $/\,c_n$ → last unknown component
  for each $k = n-1, n-2, \ldots, 1$ do → cycle for index k: back substitution in (31)
    compute $w_k$ from the $k$-th row of (31)
  end
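
Since the intermediate formulas of system (31) are not reproduced above, the following sketch (Python with NumPy) only illustrates the idea behind Algorithm 1 under our reading of form (1), namely $a_{ij}=c_j/c_i$: the singular system $(A-nI)\mathbf{w}=\mathbf{0}$ is made regular by replacing its last, redundant, equation with condition (2), and the regular system is then solved by Gaussian elimination.

```python
import numpy as np

def ahp_weights_by_elimination(c_rest):
    """Weights w for the consistent matrix built from first-row priorities (1, c_2, ..., c_n).

    Sketch of the procedure described above: the singular system (A - n I) w = 0 is made
    regular by replacing its last (redundant) equation with condition (2), i.e. sum(w) = 1.
    """
    c = np.concatenate(([1.0], np.asarray(c_rest, dtype=float)))
    n = len(c)
    A = c[np.newaxis, :] / c[:, np.newaxis]        # assumed consistent form a_ij = c_j / c_i
    M = A - n * np.eye(n)
    M[-1, :] = 1.0                                 # condition (2) replaces the zero row
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(M, b)                   # Gaussian elimination, as in Steps 1..n-1

print(np.round(ahp_weights_by_elimination([2.0, 4.0, 8.0]), 4))
```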

For programming purposes, however, the flowchart in Figure 2 is more suitable than the individual steps of the algorithm.

However, there are more ways to calculate the eigenvector components. In Sections 3 and 4 of this paper, we provided two proofs of the well-known statement that $\lambda_{\max}$ is equal to $n$ for the eigenvector problem $A\mathbf{w}=\lambda\mathbf{w}$. Thus, the system $(A-\lambda I)\mathbf{w}=\mathbf{0}$ can be directly solved with $\lambda=n$, where the components $w_i$ of the vector $\mathbf{w}$ represent the probabilities of the events. There is no need to calculate $\lambda_{\max}$ because $\lambda_{\max}=n$. Algorithm 1 helps to calculate the components $w_i$ directly, using the positive constants $c_i$ from the first row of the consistent pairwise matrix $A$, $i=2,\ldots,n$.

Example 2. We have a consistent matrix of form (1) of type 6 × 6, which is a pairwise comparison matrix for a quantitative criterion. We need to solve the system $(A-6I)\mathbf{w}=\mathbf{0}$ together with condition (2). In this example, we exceptionally use, instead of standard mathematical tools, modeling in MS Excel, specifically its add-in tool Solver, and we obtain the vector $\mathbf{w}$ of weights.

The constants $c_2=2.414$, $c_3=3.166$, $c_4=4.138$, $c_5=4.138$, and $c_6=5.379$ are used in the algorithm described in Section 6. The resulting vector coincides with the one obtained by Solver; we can see that the suggested algorithm gives the same results.
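
For illustration, the weights can be recomputed from the listed constants without any eigenvalue software; the sketch below assumes the form $a_{ij}=c_j/c_i$ with $c_1=1$ and cross-checks the eigenvector equation, so its output can be compared with the Solver result.

```python
import numpy as np

# Constants c_2, ..., c_6 listed above; c_1 = 1 by convention.
c = np.array([1.0, 2.414, 3.166, 4.138, 4.138, 5.379])

# Closed-form weights under the assumed form a_ij = c_j / c_i (components proportional to 1/c_i).
w = (1.0 / c) / np.sum(1.0 / c)
print(np.round(w, 4))

# Cross-check of the eigenvector equation A w = 6 w and of condition (2).
A = c[np.newaxis, :] / c[:, np.newaxis]
print(np.allclose(A @ w, 6.0 * w), round(float(w.sum()), 10))
```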

7. Conclusion

This paper has a mathematical methodological character, utilizing only basic matrix theory. It shows two ways to prove the well-known statement regarding the set of eigenvalues of the problem $A\mathbf{w}=\lambda\mathbf{w}$ with a consistent matrix $A$ of type $n\times n$, and it demonstrates how to find the corresponding eigenvector components, presented as probabilities of events, for the maximal eigenvalue $\lambda_{\max}=n$.

The derived and suggested algorithm can be easily programmed in different languages (C++, C, FORTRAN, MATHEMATICA, etc.) and can be used in the AHP methodology for the determination of weights (probabilities) when comparing the criteria with respect to a goal or when comparing the alternatives with respect to an individual criterion. During such a comparison, it is important to make sure that the criteria or alternatives present a set of disjoint events ($E_i\cap E_j=\emptyset$, $i\neq j$). If some events, say $E_i$ and $E_j$, are such that $E_i\cap E_j\neq\emptyset$, $i\neq j$, then condition (2) does not hold and the calculated weights do not reflect the real examined problem. In such a case, we suggest considering the intersection $E_i\cap E_j$ as a new event, say $E_k$, and incorporating this new event into the previous set of events. Next, we note that the statement and the algorithm suggested in the paper hold for arbitrary natural $n\geq 2$. This enables formulating and solving relatively complex AHP problems with a large number of criteria and a huge number of alternatives, like those in [16].

During the practical evaluation of the AHP, it can happen that the matrix $A$ is not consistent but is close to a consistent matrix $\bar{A}$. In this case, certainly, $\lambda_{\max}(A)$ differs from $\lambda_{\max}(\bar{A})=n$, and the corresponding difference is measured by means of the so-called consistency index
$$CI=\frac{\lambda_{\max}-n}{n-1},$$
that was introduced by Saaty in [8]. If the consistency index is close to zero, then the suggested algorithm can be applied, but condition (2) holds only approximately. The usual problem of how to obtain $\lambda_{\max}$ for a nonconsistent general matrix $A$ can be solved by means of software packages which provide eigenvalues of general full matrices. Such an approach, however, does not exploit any special matrix structure, like the AHP matrix (1), and the eigenvalue computations in this general case are computationally demanding, especially when $n$ is a large natural number. Nonconsistent matrices with a special structure which are close to the AHP matrix (1) are not considered in this paper and can be analyzed in future research.
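
A minimal sketch of this measurement, assuming a reciprocal pairwise matrix and using a general eigenvalue routine, is the following:

```python
import numpy as np

def consistency_index(B):
    """Saaty's consistency index CI = (lambda_max - n) / (n - 1) of a pairwise matrix B."""
    n = B.shape[0]
    lam_max = np.max(np.linalg.eigvals(B).real)
    return (lam_max - n) / (n - 1)

c = np.array([1.0, 2.0, 4.0, 8.0])
A = c[np.newaxis, :] / c[:, np.newaxis]    # consistent matrix: CI is zero up to rounding
B = A.copy()
B[0, 1] *= 1.3                             # hypothetical perturbation of one judgement
B[1, 0] = 1.0 / B[0, 1]                    # keep the matrix reciprocal
print(round(consistency_index(A), 6), round(consistency_index(B), 6))
```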

It can be said that there are some methods for repairing the consistency of matrices. Xu [13] defined a criterion to find unusual and false elements and proposed a method based on finding such elements in the matrix and repairing them. The repaired matrix has (after calculating a new element instead of the unusual one) an acceptable consistency. Saaty [12] proposed a method based on an additive perturbation involving the principal eigenvector $\mathbf{w}$ and focused attention on the element of the matrix which provokes the inconsistency. The consistency generally becomes more acceptable after substituting this element.
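
As a rough illustration of such a repair idea, the sketch below implements one simplified variant, not necessarily the exact procedures of [12] or [13]: it locates the entry that deviates most from the value implied by the principal eigenvector and replaces it, together with its reciprocal, by the implied value.

```python
import numpy as np

def repair_most_inconsistent_entry(B):
    """Replace the single worst entry of a reciprocal matrix B by the value implied
    by its principal eigenvector (a simplified repair step for illustration only)."""
    vals, vecs = np.linalg.eig(B)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    implied = w[:, None] / w[None, :]           # consistent matrix implied by w
    dev = np.abs(np.log(B / implied))           # multiplicative deviation of each entry
    i, j = np.unravel_index(int(np.argmax(dev)), dev.shape)
    B = B.copy()
    B[i, j] = implied[i, j]
    B[j, i] = 1.0 / B[i, j]
    return B

c = np.array([1.0, 2.0, 4.0])
P = c[None, :] / c[:, None]                     # consistent 3 x 3 matrix (a_ij = c_j / c_i)
P[0, 2] *= 2.0                                  # hypothetical inconsistent judgement
P[2, 0] = 1.0 / P[0, 2]
print(round(np.max(np.linalg.eigvals(P).real), 4))                                  # above 3
print(round(np.max(np.linalg.eigvals(repair_most_inconsistent_entry(P)).real), 4))  # closer to 3
```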

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The research support of KEGA-037PU-4/2014 and of the Grant Agency of Academic Alliance GA/13/2018 is gratefully acknowledged.