Mathematical Problems in Engineering
Volume 2015 (2015), Article ID 530615, 11 pages
Research Article

PROMETHEE-ROC Model for Assessing the Readiness of Technology for Generating Energy

1Management Engineering Department, Universidade Federal de Pernambuco, P.O. Box 7462, 50722-970 Recife, PE, Brazil
2CGEE Centro de Gestão e Estudos Estratégicos, Brasilia, Brazil

Received 30 November 2014; Revised 9 February 2015; Accepted 10 February 2015

Academic Editor: Wei-Chiang Hong

Copyright © 2015 Danielle Costa Morais et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper puts forward a proposal for a multicriteria decision model for prioritizing technologies that are critical for power generation in the energy sector. It deals with the context of imprecise information regarding the importance of criteria; an integration of surrogate weights with the PROMETHEE method is therefore undertaken to approach this context. In this type of strategic decision problem, how to deal with imprecise information is always a challenge. The use of surrogate weights makes a significant contribution and can facilitate the assignment of weights in a decision ranking problem, since it requires only that the decision-maker (DM) order the criteria by their importance for the decision problem. Thus, for this situation of assessing the readiness of technology for generating energy, where the DM is able and feels comfortable to order all criteria by their relative importance, the proposed approach of surrogate weights in the PROMETHEE II method, the PROMETHEE-ROC model, is shown to be adequate.

1. Introduction

In order to analyze the strategic problem of evaluating technologies for the Brazilian energy matrix, this study presents a model for prioritizing the critical technologies of the energy sector, so as to support decision makers in choosing, with greater efficiency, the technology to be implemented in the sector. It should be observed that investments in this area are huge, and an appropriate multicriteria model is necessary to ensure adequate efficiency in deciding which technology should be encouraged.

For this kind of strategic problem, some objectives should be established in order to analyze the alternatives, and some parameters, such as the relative importance among them, should be defined. In this case, a multicriteria analysis can be useful to compare the alternative technologies against different criteria, which are important enough to disallow any kind of compensation. From this perspective, this kind of decision problem has a noncompensatory rationality. However, under noncompensatory rationality the decision-maker should be capable of assigning weights to criteria that represent their relative importance, and this task can be very hard.

There are, however, cases in which the DM is neither able to provide such information nor feels comfortable about doing so, but may still be able to rank the criteria by their importance. Having obtained such a ranking from the DM, the use of surrogate weights can be considered. It is therefore proposed that surrogate weights be used with the PROMETHEE method. No studies were found in the literature regarding surrogate weights with PROMETHEE methods. As to imprecise information, this was considered in PROMETHEE VI [1] by taking into consideration a range of weights from the DM.

Many applications of multicriteria methods in different contexts were found in the literature [2–10]. Behzadian et al. [2] analyzed 217 contributions regarding PROMETHEE methods. These included their use in project portfolios [3, 4], maintenance [5], and water management [6–9]. However, an integration of ROC (rank-order centroid) weights, one of the approaches for surrogate weights, with PROMETHEE methods was not found in the literature.

Therefore, the aim of this study is to propose a model for technology readiness assessment for generating energy using a PROMETHEE method with surrogate weights. A real application was conducted regarding this strategic problem in order to support the decision made and to have an appropriate assessment on which technology should be recommended for implementation.

The paper is structured as follows: Section 2 presents a literature review regarding studies developed for technology readiness assessment for generating energy; Section 3 presents PROMETHEE-ROC model proposed, followed by a description of its application in Section 4; Section 5 makes some concluding remarks.

2. Readiness Assessment for Generating Energy

According to Veraszto et al. [11], technology can be understood as the systematization of know-how: it is designed on the basis of new demands and social requirements, thereby changing a whole set of morals and values, and it ends up being aggregated into the culture.

Having identified a specific area of interest, some technologies may be selected as priority or critical for an organization or a country. Critical technology or technology readiness can be understood as technology, the domain of which will generate economic development and will no longer need to be supplied from outside the country [12]. For this reason, critical technology is a top priority in planning within an organization or a country.

Technology readiness assessment (TRA) indicates the strategic condition of technology and is established using methods and processes that evaluate the technology itself and by specific metrics that verify the status of its development, that is, that measure the maturity of the technology assessed [13].

According to Mankins [14], an effective TRA should also incorporate some metrics that provide a consistent assessment of the “degree of risk” when developing a new technology. The main aspects of an effective TRA include the following.

(i) Performance Objectives. They aim to clearly understand the performance objectives of the new technologies and/or system capacity, including engineering aspects and operational performance measures.

(ii) Technology Readiness Level (TRL). This concept was introduced by the National Aeronautics and Space Administration (NASA) in the mid-1970s to enable the maturity of new technologies to be assessed more effectively. The TRL is a metric that evaluates the maturity level of a specific technology and consists of a 9-level scale, TRL1 being the lowest level of maturity and TRL9 the highest. In 1995, the TRL scale was enhanced by introducing the first detailed definitions of each level, including examples. Since then, TRLs have been adopted by the US Government Accountability Office (GAO) and the US Department of Defense (DoD) [15], and they are being considered for use in several other organizations.

(iii) Degree of Difficulty of Research and Development. It is important during the formal TRA to develop a clear understanding of the obstacles to be faced and the uncertainty related to whether the new technologies can be developed successfully.

Wei-Gang et al. [16] stated that the TRL scale is simple and easy to operate and has been applied in many fields such as aeronautics, astronautics, and energy resources. However, this tool also has some weaknesses, especially because it depends on qualitative assessment, which relies on the professional knowledge of experts and is thus prone to high subjectivity and low objectivity.

Therefore, some authors have conducted studies that contribute to improving TRA in order to evaluate different critical technologies.


Chen et al. [17] proposed a method for quantitative analysis of the TRA for weaponry in the development engineering phase. They based the TRA index on the objective and quantifiable characteristics of the key engineering performance parameters (KEPP) and calculated the performance risk of the technology in the light of its degree of difficulty.

Wei-Gang et al. [16] proposed a TRA based on a multilevel reference condition (RC). The authors established the RC scale, with 6 layers, by selecting the characteristic parameter of the reference condition and subsequently conducted a statistical analysis in accordance with the purpose of the technology research and development, related to its systematic historical experience. The RC is the condition in which the technology currently stands: whether it is just a research hypothesis, whether there are already laboratory tests, or even whether there are already simulations in high-fidelity environments.

Hoffmann et al. [18] proposed a methodology for decision making under uncertainty, applied to the evaluation of chemical process technology, focused on identifying potential environmental problems as early as possible in the design process in order to avoid changes late in the process. Thus, the model proposed aims to select promising alternative processes taking into account the uncertainties, based on the Monte-Carlo simulation. The model is illustrated with a case study on selecting a process for producing hydrocyanic acid in which more than 400 variables of uncertainty are dealt with.

Li and Zhu [19] presented a case study focusing on the evaluation of object technology software in the computer service industry. The decision model was developed to give advice on designs of object-oriented software. The assessment uses a quantitative approach based on a mixed integer linear programming model and a multiobjective model, in order to reduce subjectivity and thus to provide a consistent selection tool. The authors state that this approach increases customer orientation as it allows users to specify their needs and goals and provides a sensitivity analysis of the results.

Demirkiran and Altunok [20] put forward a systematic model to be used as a guide for system planners and architects in a multicriteria technology selection process. The model also proposes a method to define the elements of a system for critical technology. The model consists of 5 steps, namely, analyzing requirements, identifying alternatives, evaluating alternatives, identifying critical technologies, and selecting alternatives. The weights of the impact matrix of system elements are calculated for each requirement by applying the analytic hierarchy process (AHP). The authors illustrate the model with an application in a case study on communication systems with an unmanned aerial vehicle (UAV).

Goetghebeur et al. [21] developed a model to support decisions on health technology assessment (HTA) so as to evaluate 10 drugs from six therapeutic areas. The criteria used were the impact in the context of intervention in the disease, the results of intervention, the type of benefits, the economic criteria, and the quality of the evidence obtained. The results from the model provide a means of capturing the nonquantifiable considerations that may affect the overall rating.

Thokala and Duenas [22] conducted a study on the possibility of applying multicriteria decision analysis (MCDA) to health technology assessment. The authors considered that applying MCDA to analyze these technologies can support DMs, since making the several criteria explicit renders the process more structured and the decision more transparent, thereby enabling alternatives to be evaluated clearly against the criteria. The authors commented that several pharmaceutical manufacturers recommended the use of MCDA but acknowledged that each method has its own characteristics and more research is needed before implementing a method for the process of health technology assessment.

From this perspective, this study aims to develop a multicriteria decision model to evaluate critical technology for generating energy, in order to take the DM’s multiple objectives into account and thereby analyze which technology for power generation should be recommended. Figure 1 shows where the multicriteria model is placed in the TRA for energy.

Figure 1: Flowchart of the TRA and position of the multicriteria decision evaluation.

As can be seen in Figure 1, the TRA for energy consists of two phases: diagnostic and implementation. In the diagnostic phase, a search is made of new technologies that could be developed. Thereafter, the maturity level of that technology is analyzed using TRL metrics. When a specific technology is chosen then a new search on a subset of critical technologies is conducted before finally beginning the implementation phase. Bearing this in mind, the multicriteria decision model can be applied in two different phases, either when evaluating new technologies in the diagnostic phase or in the prospection phase when evaluating the subset of critical technologies that are chosen for implementation. Nevertheless, this study focuses on the first and more strategic part of the TRA for generating energy in order to aid the evaluation of new technologies.

3. Multicriteria Decision Model for TRA for Generating Energy

The main benefit offered by the multicriteria decision model for evaluating critical technology for generating energy is that of obtaining knowledge about the performance of critical technologies against the criteria, which are defined by the DM and depend on the level of detail given in the assessment process.

Multicriteria decision aid models are useful for representing real-life problems as they capture the interaction of several contextual aspects. In addition, multicriteria methods propose a mathematical structure to help a DM evaluate the context, depending on the problematic at hand.

The PROMETHEE method (preference ranking organization method for enrichment evaluation) [23] is an outranking method based on two stages: building the outranking relation and exploring this relation to support the decision process.

The first stage enriches the preference structure: the notion of a generalized criterion is introduced with the aim of capturing the range of differences between the evaluations on each criterion; that is, it describes the intensity of preference of alternative $a$ over alternative $b$ for a given criterion $j$, which is denoted by $P_j(a,b)$ and takes values between 0 and 1. Each criterion is associated with a generalized criterion, which can be type 1 (usual), type 2 (quasicriterion), type 3 (linear preference criterion), type 4 (criterion in level), type 5 (criterion of linear preference with an indifference zone), or type 6 (Gaussian).

At the second stage, the outranking relation is explored with the aim of supporting the decision process. The aggregated preference index $\pi(a,b)$ is calculated, which is defined by Brans and Vincke [23] and expressed by the following equation, where $w_j$, with $\sum_{j=1}^{n} w_j = 1$, represents the weight, that is, the relative importance, of each criterion $j$:
$$\pi(a,b) = \sum_{j=1}^{n} w_j P_j(a,b). \quad (1)$$

Moreover, the outflow (positive outranking flow $\phi^{+}(a)$) and the inflow (negative outranking flow $\phi^{-}(a)$) should also be calculated by the following equations, respectively, where $A$ is the set of the $m$ alternatives:
$$\phi^{+}(a) = \frac{1}{m-1}\sum_{x \in A}\pi(a,x), \quad (2)$$
$$\phi^{-}(a) = \frac{1}{m-1}\sum_{x \in A}\pi(x,a). \quad (3)$$

The complete preorder of PROMETHEE II is defined as in [1]. The net flow, $\phi(a)$, represents the balance between the strengths and weaknesses of alternative $a$ and is expressed by (4); the higher the net flow, the better the alternative:
$$\phi(a) = \phi^{+}(a) - \phi^{-}(a). \quad (4)$$

As can be seen, the PROMETHEE II method [1] is based on an outranking relation approach to obtain the ranking of alternatives, using parameters such as weights from a DM’s preferences to aggregate information and to indicate the performance values of the criteria. However, there are situations in which the DM is not able to define the values of the weights of the criteria. This is an opportunity to apply methods that facilitate analyzing imprecise information about criteria in a decision model based on outranking methods [24–26].
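To make the two-stage computation concrete, the snippet below is a minimal sketch of PROMETHEE II with hypothetical scores and weights, assuming the usual (type 1) generalized criterion and criteria to be maximized; it is an illustration, not the decision support system used in the paper.

```python
# Minimal PROMETHEE II sketch on hypothetical data, using the usual
# (type 1) generalized criterion: P_j(a, b) = 1 if a beats b on j, else 0.

def usual_criterion(d):
    """Type 1 generalized criterion: strict preference for any positive difference."""
    return 1.0 if d > 0 else 0.0

def promethee_ii(evaluations, weights):
    """Return net flows phi(a) = phi_plus(a) - phi_minus(a).

    evaluations: one row per alternative (all criteria to be maximized).
    weights: criteria weights summing to 1.
    """
    m = len(evaluations)
    # Aggregated preference index pi(a, b) = sum_j w_j * P_j(a, b)  -- equation (1)
    pi = [[sum(w * usual_criterion(ea[j] - eb[j])
               for j, w in enumerate(weights))
           for eb in evaluations]
          for ea in evaluations]
    # Outflow and inflow averaged over the other m-1 alternatives -- equations (2), (3)
    phi_plus = [sum(pi[a]) / (m - 1) for a in range(m)]
    phi_minus = [sum(pi[b][a] for b in range(m)) / (m - 1) for a in range(m)]
    # Net flow -- equation (4)
    return [p - q for p, q in zip(phi_plus, phi_minus)]

# Three hypothetical technologies scored on three criteria
scores = [[7, 5, 9], [6, 8, 4], [8, 6, 5]]
weights = [0.5, 0.3, 0.2]
print(promethee_ii(scores, weights))
```

In this hypothetical instance the third alternative obtains the highest net flow and would be ranked first; the net flows of all alternatives always sum to zero.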

Multicriteria decision problems demand an elicitation process for the numeric values which indicate the exact importance of the criteria considered in the process of evaluating a decision. The PROMETHEE II method has a noncompensatory rationality, and the meaning of the criteria weights is related to their degree of relative importance [25]. Methods that elicit weights adopt simple procedures to determine a precise numerical weight representing the information extracted from the DM’s preferences. In addition, there are some advanced procedures, such as trade-off methods [27], SMARTS [28], SWING [29], and SMARTER [30], which have often been applied in additive aggregation models based on multicriteria procedures.

These kinds of elicitation processes are structured so as to maintain the coherence of the DM’s preferences when the values of the importance of the criteria are known accurately. On the other hand, in many situations defining these values is a complex and problematic process, due to the imprecise information present in real-life multicriteria decision making and the significant difficulty of expressing a detailed assessment of the weights. According to Wang et al. [31], human judgment regarding preferences is difficult to estimate. From this perspective, there are some approaches which consider imprecise information so as to maintain the ratio weights for the decision problem.

One important category of numerical methods enables the criteria to be ranked in accordance with the DM’s preferences and to relate this ranking to the degree of importance of each criterion in the problem context by converting this ranking into numerical values. These methods facilitate the elicitation of weights from the DM and minimize the effort that the DM needs to make to determine the ordinal importance of the criteria for the decision problem. These methods are recognized as surrogate weights.

For many multicriteria procedures, several authors suggest specific functions for determining criteria weights based on the surrogate weights process, once the DM has provided the information about the order of importance of the criteria [29, 32–36]. There are different methods for eliciting weights. However, the conditions needed are given by ranking the weights of the criteria ($w_1 \geq w_2 \geq \cdots \geq w_n \geq 0$), where $n$ is the number of criteria and $\sum_{j=1}^{n} w_j = 1$, which in most of these methods defines the $n$-dimensional space ($S_w$) given by
$$S_w = \left\{(w_1, \ldots, w_n) \mid w_1 \geq w_2 \geq \cdots \geq w_n \geq 0,\ \sum_{j=1}^{n} w_j = 1\right\}. \quad (5)$$

There are many traditional studies in the literature comparing multicriteria weights and evaluating how to convert rankings into numerical weights [26, 37, 38]. The most common proposals for such conversion are rank sum (RS), rank reciprocal (RR) [32], and rank-order centroid (ROC) [33]. Nevertheless, when comparing the weights provided by RS, RR, and ROC, significant differences can be noticed. There are also other measures, such as maximum entropy ordered weighted averaging (MEOWA), reformulated by [39] as minimal variability, which seems to perform similarly to ROC and to outperform the other surrogate weights. Besides, there are other studies using ROC, which is considered to be the most promising, including cardinal preference in the rank order for additive multicriteria models [40].

From that perspective, ROC weights are applied in this model, since they present many advantages and are widely applied in multicriteria models [33]. ROC is a direct weight elicitation technique that consists of ordering the objectives or criteria of the decision problem from the most important to the least important and uses the formula described by the following equation for assigning weights in the problem:
$$w_i = \frac{1}{n}\sum_{j=i}^{n}\frac{1}{j}. \quad (6)$$

According to (6), $n$ is the total number of objectives or criteria in a multicriteria decision problem, and $w_i$ is the weight for the criterion in the $i$th position in the ranking. ROC identifies the extreme points of the weight space given by (5) and determines the weights based on the centroid of this space [26, 33]. The use of ROC is considered in several contextual analyses due to its quality and simplicity in the process of assigning weights. Thus, in multicriteria decision problems this method is widely recommended for dealing with imprecise information about the importance of criteria [26, 33].
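As a quick illustration, the ROC formula in (6) can be computed directly; the example below uses a hypothetical problem with four ranked criteria.

```python
# ROC (rank-order centroid) weights for n ranked criteria, per equation (6):
# w_i = (1/n) * sum_{j=i}^{n} 1/j, with w_1 for the most important criterion.

def roc_weights(n):
    """Return the ROC weight vector for n criteria, most important first."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

w = roc_weights(4)
print([round(x, 4) for x in w])  # [0.5208, 0.2708, 0.1458, 0.0625]
```

Note how the weights decrease with rank and sum to 1, so they satisfy the conditions of the weight space in (5) by construction.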

Thus, the use of ROC presents a significant contribution and can facilitate assigning weights in a decision ranking problem. The DM is required to order the criteria by their importance for the decision problem. ROC weights have an appealing theoretical rationale and appear to perform better than other rank-based schemes in terms of making an accurate choice. This consideration enables ROC to be considered as an approach that is compatible with the conceptual structure of PROMETHEE methods.

Thus for this situation of technology readiness assessment for generating energy where the DM is able to order all criteria by their relative importance and feels comfortable about this, the proposed approach of surrogate weights in PROMETHEE II method can be applied. We call this new approach the PROMETHEE-ROC model. Its structure is presented in Figure 2.

Figure 2: Steps of the multicriteria decision model proposed.

In the first part of the multicriteria decision model, the requirements of the problem context are recognized and identified. Thus, this step of the model is related to structuring the problem. In this step, the DM may establish the objectives and criteria for the problem analyzed. The set of objectives and criteria must be determined to represent all of the requirements of the assessment process in a nonredundant and concise way. In the second step, the DM should identify the viable set of critical technologies needed to build the evaluation matrix by considering the performance of each critical technology for each criterion in the decision problem. This task is important and it may require some of the previous steps to be reviewed.

It is in the steps that follow that the main contribution of the decision model is introduced, namely, the logical structure of the PROMETHEE-ROC approach, which is able to process and provide information about a ranking decision process in this context. Therefore, the focus is placed on the context of power generation, and the aim is to extract information for prioritizing critical technologies in the TRA.

In order to provide a final recommendation, which is the last step of the model, it is necessary to conduct a sensitivity analysis. This sensitivity analysis is based on a Monte-Carlo simulation and uses Kendall’s tau coefficient to test the correlations. Furthermore, it is worth emphasizing that this sensitivity analysis can minimize the effects of one of the criticisms of the ROC procedure. It has often been argued [41] that the ROC procedure puts larger weights on the criteria ranked highest in the rank order and is thus perceived to be too sharp or discriminative. This concern relates to a particular situation in which such weights might not be close to the DM’s real preferences, although this does not often happen. In order to investigate the impact of such a situation, the simulation process of the sensitivity analysis may consider different procedures for generating random weights. In this case, a greater variation for the first-ranked weights, with a skewed distribution, may be applied in order to examine this particular situation.

In the next section, this model is illustrated by means of an application based on a real case in Brazil.

4. Applying the Model

This section is based on a real case in which the model proposed and described above was applied in order to assess the readiness of technology for generating power in Brazil. A preliminary analysis of this problem was reported as a conference communication [42], which consisted of a pilot application intended to integrate all the actors in the process, with diverse criteria and numerical results for illustrative purposes. Furthermore, that experimental application did not include a full sensitivity analysis with simulation.

The application follows the steps of the multicriteria decision model presented in Figure 2, while the decision support system developed for this case is also presented to facilitate understanding of the problem.

4.1. Structuring Objectives and Criteria

To facilitate the structuring of the objectives and criteria for the decision problem, strategic options development analysis (SODA) [43, 44] was used which led to six objectives and nineteen criteria being identified for the process of evaluating critical technologies. The set of the objectives and criteria is presented in Table 1, which also shows their code, whether the interest is in minimizing or maximizing the criterion and the unit or measurement scale for each criterion.

Table 1: Objectives and criteria.

The units or measurement scales of the criteria are the reference parameters for the analyses of the critical technologies. Table 1 presents the impact scale (IS), time scale (TS), and curtailment condition scale (CS). The details about these scales are shown in Table 2 (a, b, and c).

Table 2: (a) Impact Scale (IS). (b) Time Scale (TS). (c) Curtailing Condition Scale (CS).
4.2. Establishing the Set of Critical Technologies

In the decision context, the selection process of the set of alternatives led to fourteen critical technologies being identified, which are distributed in five technological areas. The technological area and subarea and the code of the critical technology are shown in Table 3.

Table 3: Set of alternatives.
4.3. Establishing the Ranking of the Criteria and Computing the Criteria Weights

One of the steps of PROMETHEE-ROC is to establish the ranking of the criteria and to compute the criteria weights. This step relies on (6): after each criterion of the problem has been described, the criteria weights are computed based on ROC weights [25, 33]. In the decision model, each criterion aggregates information about its attributes so as to define its influence on the decision problem, including its position in the ranking related to its degree of importance, taking into consideration the priority order of the criteria extracted from the DM’s preferences.

The next step is to introduce the evaluating process that will analyze the critical technologies as decision alternatives. This task leads to a consequence matrix being built, which evaluates the alternatives by criterion, using the scale shown in Table 2. The step of applying ROC weights and evaluating the critical technologies by criterion is supported by a decision support system and is illustrated in Figure 3.

Figure 3: Establishing the consequence matrix and the weights of criteria.

For the context analyzed, the DM considered the usual preference function for all criteria. This function indicates that any difference between alternative performances represents a strict preference; thus the preference and indifference thresholds take null values, in keeping with the concepts of the PROMETHEE method [23]. The use of ROC weights minimizes the effort that a DM needs to make in the process of indicating the degree of importance of the criteria. Based on the consequence matrix and the values of the criteria, it is possible to evaluate the performance of the alternatives by implementing a multicriteria method.
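As a small illustration of this choice, the usual preference function with mixed min/max criteria (as declared in Table 1) can be sketched as follows; the goals and scores below are hypothetical.

```python
# Usual (type 1) preference with mixed min/max criteria: for a criterion to
# be minimized, the sign of the difference is flipped before applying the
# preference function, so a *lower* value yields a strict preference.

def preference(d, goal):
    """Return 1.0 if the difference d indicates strict preference, else 0.0."""
    d = d if goal == "max" else -d   # reverse differences for minimized criteria
    return 1.0 if d > 0 else 0.0

goals = ["max", "min"]               # e.g. maximize impact, minimize time
a, b = [7, 3], [5, 8]                # two hypothetical technologies
prefs = [preference(a[j] - b[j], goals[j]) for j in range(2)]
print(prefs)  # [1.0, 1.0]: a beats b on both (higher impact, lower time)
```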

4.4. Evaluating the Critical Technologies

The critical technologies are evaluated based on PROMETHEE-ROC. The mathematical structure of the multicriteria method provides the first recommendation extracted from the decision model, in which outranking relation theory is used to obtain the ranking of alternatives. From this first recommendation, the performance of the critical technologies can be evaluated using the net flow value obtained from the multicriteria method, and the final result is illustrated by graphic resources, as in Figure 4.

Figure 4: Final ranking of the alternatives.

The tools of this system offer spreadsheets and graphic resources that export results for the decision process. In accordance with the results, the critical technology which must be prioritized in the first instance is Aut, followed by FotS and Wind. The last position is taken by BIL. The full names of the alternatives may be seen in Table 3. This result reflects the analysis using ROC weights to represent the importance of the criteria in the decision problem. From this perspective, the decision model provides a last step, implemented by an information system, to obtain the arguments that will define a final recommendation for the TRA.

4.5. Sensitivity Analysis and Final Recommendation

Finally, a Monte-Carlo sensitivity analysis is implemented to verify how sensitive the results are when the weights change and/or there is a change in the evaluation matrix. From this step, other scenarios may be built by varying the weight values and the evaluation matrix within a percentage range, using a probability distribution to configure other thresholds for the parameters of the decision problem and to analyze possible changes in the results.

It is worth noting that the model also incorporates different set-ups of parameter variation for the sensitivity analysis. The DM can choose to vary all criteria at the same time or to evaluate the results by changing a single criterion at a time, and the same applies when analyzing changes in the evaluation matrix. This generates an asymmetric distribution and allows an important analysis by the DM.

For the context analyzed, the DM preferred to simulate one hundred thousand cases, considering a variation of twenty percent in the weight values. This variation assumed a triangular distribution so as to obtain the new weight values.
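A minimal sketch of this perturbation step is shown below. The paper does not detail the exact sampling scheme, so the helper is illustrative: it assumes each weight is drawn independently from a triangular distribution within ±20% of its original value, with the mode at the original value, and the vector is then renormalized to sum to 1.

```python
# Hypothetical sketch of one Monte-Carlo draw of perturbed criteria weights.
import random

def perturb_weights(weights, variation=0.20, rng=random):
    """Draw each weight from a triangular distribution within +/- variation
    of its original value (mode at the original), then renormalize."""
    drawn = [rng.triangular(w * (1 - variation), w * (1 + variation), w)
             for w in weights]
    total = sum(drawn)
    return [d / total for d in drawn]

random.seed(0)
w = [0.5208, 0.2708, 0.1458, 0.0625]   # ROC weights for 4 ranked criteria
print(perturb_weights(w))
```

In the application, one hundred thousand such draws were made, and the resulting ranking of alternatives was recomputed for each draw.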

In addition, a statistical correlation test is carried out to analyze the accuracy of the sensitivity analysis outcomes. This test uses a significance level to accept or reject the hypothesis that there is an association between two results in each case of the simulation process. The simulation proposed by the DM assumes a significance level (alpha) of 0.05, considering Kendall’s tau coefficient for testing the correlation. The results reject the hypothesis that there is no correlation among the results. In other words, the simulation is considered coherent and consistent at a significance level of 0.05. Thus, it is possible to show the outcomes of the simulation.
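The consistency check can be illustrated with a hand-rolled Kendall's tau between the original ranking and one simulated ranking; the rankings below are hypothetical, and a full implementation would also compute the p-value for the hypothesis test.

```python
# Kendall's tau between two rankings of the same items: the fraction of
# concordant minus discordant pairs over all pairs. Tau near 1 means a
# simulated run reproduces the original order almost exactly.

def kendall_tau(r1, r2):
    """Kendall rank correlation between two tie-free rankings."""
    n = len(r1)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (r1[i] - r1[j]) * (r2[i] - r2[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

original = [1, 2, 3, 4, 5]               # hypothetical ranks of 5 technologies
simulated = [1, 2, 4, 3, 5]              # one adjacent swap after perturbation
print(kendall_tau(original, simulated))  # 0.8
```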

The information system allows visual analysis considering the results of the simulations. The results obtained from the sensitivity analysis are computed based on the percentage of change in relation to the original result verified by Figure 4. In other words, for each new result obtained from a number of simulations, the new ranking of alternatives is verified and is compared with the original ranking. The changes are computed and shown in a table and in graphic resources, as in Figure 5.

Figure 5: Sensitivity analysis for first position.

Figure 5 shows the analysis related to the first position of the original result, held by the alternative known as Aut. Considering the scenario determined by the DM, it can be concluded that in 99.993% of the cases simulated the alternative Aut remained in first position, and in the other 0.007% this alternative was shifted. The graphic resources assist the preview of the possible changes: in those cases, Aut was allocated to second position in the simulation. The position shifts of the alternatives can be presented as a percentage distribution over the rankings, and this reading can be made for all alternatives. Figure 6 illustrates the sensitivity analysis for the last position.

Figure 6: Sensitivity analysis for last position.

Similarly, for BIL in the last position, in 2.742% of the cases simulated this alternative was shifted to the twelfth or thirteenth position. This sensitivity analysis enables details of and possible differences in the net flows produced by the method to be seen and helps to observe how sensitive the alternatives are to changes in the weights of the criteria. Thus, the DM may assess the results and obtain the final recommendation in accordance with his or her preferences. The DM feels confident with the presentation of the outcomes and uses the recommendations for the decision process related to the problem.

5. Concluding Remarks

This paper presented a multicriteria decision model for prioritizing technology readiness for the energy sector. Surrogate weights were integrated with the PROMETHEE method, and the resulting PROMETHEE-ROC model has the advantage of requiring only ordinal information about the criteria from the DM. Thus, by using surrogate weights, the effort that the DM needs to make in stating the degree of importance of the criteria decreases.
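For reference, the ROC (rank order centroid) surrogate weights for n criteria ranked from most to least important are given by w_i = (1/n) Σ_{k=i}^{n} 1/k; a minimal sketch:

```python
def roc_weights(n):
    """Rank order centroid weights for n criteria ranked by importance.
    w_i = (1/n) * sum_{k=i}^{n} 1/k, with i = 1 the most important criterion."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

w = roc_weights(4)
print([round(x, 4) for x in w])  # [0.5208, 0.2708, 0.1458, 0.0625]
```

The weights are strictly decreasing and sum to 1, so only the DM's ranking of the criteria is needed to obtain a complete weight vector for PROMETHEE II.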

In the literature, the use of ROC has been studied mainly in connection with the additive model, which is a compensatory approach completely different from the PROMETHEE method, for which no similar proposition had been made before, apart from an initial conference communication. Moreover, a preliminary study [45] has shown that the ROC procedure also has the best performance for the PROMETHEE method, as had previously been found for the additive model [33].

Certainly, the use of surrogate weights is also relevant for a noncompensatory approach. The straightforward generation of weights provided by ROC is quite valuable for real-world applications and may help to increase the use of this kind of method in a more direct way.

However, it should be clear that ROC has been found to be a good surrogate-weight procedure in simulations based on randomly generated weight vectors, under the assumption that the DM's mindset is captured by the random generation of the decision problem vectors. That is, there is a chance, although not a high one, that the DM's actual preferences might differ. On the other hand, the proposed procedure is appropriate for situations in which the DM is not able to give any precise information about the weights, as explained.

The model was applied to a problem based on a real context of evaluating the technologies that should be encouraged for use in generating energy in Brazil. Fourteen critical technologies were evaluated, in technological areas such as chemicals, optics, telecommunications, mechanics, and electric energy. As a result, a strategic decision could be made in a more structured way.

This model could be applied to other problems, provided that the DM has a noncompensatory rationality and cannot give complete information about the weights of the criteria but is able to give partial (ordinal) information about them. Future work will adapt this model to a group decision situation, which is one of the main challenges for the kind of context analyzed.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This study is part of a research program funded by the Brazilian Research Council (CNPq). The authors are grateful for the constructive criticism from the anonymous reviewers, which was useful in improving the final version of this paper, particularly the insightful recommendations on setting the parameters for the sensitivity analysis.


References

  1. J. P. Brans and B. Mareschal, PROMÉTHÉE-GAIA: une méthodologie d'aide à la décision en présence de critères multiples, Éditions de L'Université de Bruxelles, Bruxelles, Belgium, 2002.
  2. M. Behzadian, R. B. Kazemzadeh, A. Albadvi, and M. Aghdasi, “PROMETHEE: a comprehensive literature review on methodologies and applications,” European Journal of Operational Research, vol. 200, no. 1, pp. 198–215, 2010.
  3. R. Vetschera and A. T. de Almeida, “A PROMETHEE-based approach to portfolio selection problems,” Computers and Operations Research, vol. 39, no. 5, pp. 1010–1020, 2012.
  4. A. T. de Almeida and R. Vetschera, “A note on scale transformations in the PROMETHEE V method,” European Journal of Operational Research, vol. 219, no. 1, pp. 198–200, 2012.
  5. C. A. Virgínio Cavalcante, R. J. Pires Ferreira, and A. T. de Almeida, “A preventive maintenance decision model based on multicriteria method PROMETHEE II integrated with Bayesian approach,” IMA Journal of Management Mathematics, vol. 21, no. 4, pp. 333–348, 2010.
  6. D. C. Morais and A. T. de Almeida, “Group decision-making for leakage management strategy of water network,” Resources, Conservation and Recycling, vol. 52, no. 2, pp. 441–459, 2007.
  7. M. F. Abu-Taleb and B. Mareschal, “Water resources planning in the Middle East: application of the PROMETHEE V multicriteria method,” European Journal of Operational Research, vol. 81, no. 3, pp. 500–511, 1995.
  8. M. E. Fontana and D. C. Morais, “Using PROMETHEE V to select alternatives so as to rehabilitate water supply network with detected leak,” Water Resources Management, vol. 27, no. 11, pp. 4021–4037, 2013.
  9. V. B. S. Silva, D. C. Morais, and A. T. Almeida, “A multicriteria group decision model to support watershed committees in Brazil,” Water Resources Management, vol. 24, no. 14, pp. 4075–4091, 2010.
  10. F.-K. Wang, C.-H. Hsu, and G.-H. Tzeng, “Applying a hybrid MCDM model for six sigma project selection,” Mathematical Problems in Engineering, vol. 2014, Article ID 730934, 13 pages, 2014.
  11. E. V. Veraszto, D. da Silva, N. A. de Miranda, and F. O. Simon, “Tecnologia: buscando uma definição para o conceito,” Revista Prisma, no. 7, 2008.
  12. F. C. L. Melo, J. R. Gomes, M. L. Gregori, and M. C. V. Salgado, “A Tecnologia Crítica na Área Espacial Brasileira,” Revista Espaço Brasileiro, no. 12, pp. 24–25, 2011.
  13. J. W. Schot and A. Rip, “The past and future of constructive technology assessment,” Technological Forecasting and Social Change, vol. 54, no. 2-3, pp. 251–268, 1997.
  14. J. C. Mankins, “Technology readiness assessments: a retrospective,” Acta Astronautica, vol. 65, no. 9-10, pp. 1216–1223, 2009.
  15. US Department of Defense (DoD), Technology Readiness Assessment (TRA) Deskbook, Director, Research Directorate (DRD), 2009.
  16. C. Wei-Gang, L. Wo-Ye, G. Yan, and H. Fei, “Approach and application of technology readiness assessment based-on multilevel reference condition,” in Proceedings of the 20th International Conference on Management Science & Engineering, pp. 1993–1998, Harbin, China, July 2013.
  17. W. Chen, S. Jin, M. Zhang, and X. Chen, “Technology readiness assessment and application in the engineering development phase,” in Proceedings of the International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE '12), pp. 1493–1496, June 2012.
  18. V. H. Hoffmann, G. J. McRae, and K. Hungerbühler, “Methodology for early-stage technology assessment and decision making under uncertainty: application to the selection of chemical processes,” Industrial and Engineering Chemistry Research, vol. 43, no. 15, pp. 4337–4349, 2004.
  19. X. Li and D. Zhu, “Object technology software selection: a case study,” Annals of Operations Research, vol. 185, no. 1, pp. 5–24, 2011.
  20. Z. K. Demirkiran and T. Altunok, “A systems approach for technology assessment and selection,” in Proceedings of the IEEE/AIAA 31st Digital Avionics Systems Conference (DASC '12), pp. E11–E111, Williamsburg, Va, USA, October 2012.
  21. M. M. Goetghebeur, M. Wagner, H. Khoury, R. J. Levitt, L. J. Erickson, and D. Rindress, “Bridging health technology assessment (HTA) and efficient health care decision making with multicriteria decision analysis (MCDA): applying the evidem framework to medicines appraisal,” Medical Decision Making, vol. 32, no. 2, pp. 376–388, 2012.
  22. P. Thokala and A. Duenas, “Multiple criteria decision analysis for health technology assessment,” Value in Health, vol. 15, no. 8, pp. 1172–1181, 2012.
  23. J.-P. Brans and P. Vincke, “A preference ranking organisation method,” Management Science, vol. 31, no. 6, pp. 647–656, 1985.
  24. T. Solymosi and J. Dombi, “A method for determining the weights of criteria: the centralized weights,” European Journal of Operational Research, vol. 26, no. 1, pp. 35–41, 1986.
  25. J. C. Vansnick, “On the problem of weights in multiple criteria decision making (the noncompensatory approach),” European Journal of Operational Research, vol. 24, no. 2, pp. 288–294, 1986.
  26. F. H. Barron and B. E. Barrett, “Decision quality using ranked attribute weights,” Management Science, vol. 42, no. 11, pp. 1515–1523, 1996.
  27. R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade-offs, John Wiley & Sons, New York, NY, USA, 1976.
  28. W. Edwards, “How to use multivariate utility measurement for social decision making,” IEEE Transactions on Systems, Man and Cybernetics, vol. 7, no. 5, pp. 326–340, 1977.
  29. D. von Winterfeldt and W. Edwards, Decision Analysis and Behavioural Research, Cambridge University Press, New York, NY, USA, 1986.
  30. W. Edwards and F. H. Barron, “SMARTS and SMARTER: improved simple methods for multiattribute utility measurement,” Organizational Behavior and Human Decision Processes, vol. 60, no. 3, pp. 306–325, 1994.
  31. C.-Y. Wang, P.-H. Tsai, and H. Zheng, “Constructing Taipei City sports centre performance evaluation model with fuzzy MCDM approach based on views of managers,” Mathematical Problems in Engineering, vol. 2013, Article ID 138546, 13 pages, 2013.
  32. W. G. Stillwell, D. A. Seaver, and W. Edwards, “A comparison of weight approximation techniques in multiattribute utility decision making,” Organizational Behavior and Human Performance, vol. 28, no. 1, pp. 62–77, 1981.
  33. F. H. Barron, “Selecting a best multiattribute alternative with partial information about attribute weights,” Acta Psychologica, vol. 80, no. 1–3, pp. 91–103, 1992.
  34. F. A. Lootsma, Multi-Criteria Decision Analysis via Ratio and Difference Judgement, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
  35. F. A. Lootsma and P. W. G. Bots, “The assignment of scores for output-based research funding,” Journal of Multi-Criteria Decision Analysis, vol. 8, no. 1, pp. 44–50, 1999.
  36. J. González-Pachón and C. Romero, “Aggregation of partial ordinal rankings: an interval goal programming approach,” Computers and Operations Research, vol. 28, no. 8, pp. 827–834, 2001.
  37. T. J. Stewart, “Use of piecewise linear value functions in interactive multicriteria decision support: A Monte Carlo study,” Management Science, vol. 39, no. 11, pp. 1369–1381, 1993.
  38. B. S. Ahn and K. S. Park, “Comparing methods for multiattribute decision making with ordinal weights,” Computers & Operations Research, vol. 35, no. 5, pp. 1660–1670, 2008.
  39. R. Fullér and P. Majlender, “On obtaining minimal variability OWA operator weights,” Fuzzy Sets and Systems, vol. 136, no. 2, pp. 203–215, 2003.
  40. M. Danielson, L. Ekenberg, A. Larsson, and M. Riabacke, “Weighting under ambiguous preferences and imprecise differences in a cardinal rank ordering process,” International Journal of Computational Intelligence Systems, vol. 7, no. 1, pp. 105–112, 2014.
  41. V. Belton and T. Stewart, Multiple Criteria Decision Analysis: An Integrated Approach, Kluwer Academic, London, UK, 2002.
  42. A. T. de Almeida, D. C. Morais, L. H. Alencar, T. R. N. Clemente, E. M. Krym, and C. Z. Barboza, “A multicriteria decision model for technology readiness assessment for energy based on PROMETHEE method with surrogate weights,” in Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '14), Selangor, Malaysia, 2014.
  43. C. Eden, “Analyzing cognitive maps to help structure issues or problems,” European Journal of Operational Research, vol. 159, no. 3, pp. 673–686, 2004.
  44. F. Ackermann, “Problem structuring methods ‘in the Dock’: arguing the case for Soft OR,” European Journal of Operational Research, vol. 219, no. 3, pp. 652–658, 2012.
  45. T. R. N. Clemente, A. T. de Almeida, and A. T. de Almeida-Filho, “Comparing decision rules for surrogate weights on PROMETHEE method,” CDSID Working Paper, Recife, Brazil, 2014.