Mathematical Problems in Engineering
Volume 2015, Article ID 956757, 14 pages
Research Article

A Novel AHP-Based Benefit Evaluation Model of Military Simulation Training Systems

1Department of Management Sciences, R.O.C. Military Academy, Kaohsiung 830, Taiwan
2Department of Industrial Engineering and Management, National Chiao Tung University, Hsinchu 300, Taiwan

Received 1 July 2014; Accepted 22 December 2014

Academic Editor: Guangming Xie

Copyright © 2015 Kuei-Hu Chang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


With the constantly changing patterns of war, more technologically exquisite weapons are designed, increasing in cost and complexity. Training maneuvers with live ammunition are expensive and are prone to accidental casualties. Thus, many countries are gradually adopting simulation training systems to replace some actual exercises to reduce casualties and still maintain maximum combat readiness. However, each simulation training system has a different background with regard to time, source, function goal, and quality of environment and staff, which makes it difficult to assess the benefit of simulation training systems. Moreover, traditional benefit assessments of military simulation training systems have merely considered their efficiency, not safety, causing biased conclusions. To solve these issues, this paper integrates the analytic hierarchy process (AHP), importance-performance analysis (IPA), and the 2-tuple fuzzy linguistic representation model to determine the benefits of simulation training systems. To verify the proposed approach, a numerical example of the evaluation of a training simulator system’s benefit is adopted. Compared with the traditional AHP method, the proposed method does not lose any valuable information provided by experts and also considers training safety. Further, these data are presented in 2-dimensional graphs for managers to further guide the decision-making process.

1. Introduction

Recently, the price of weapons and equipment has risen substantially, and human lives are taken into account during training. This has caused training simulators and model-based simulation systems to be adopted gradually instead of actual soldiers for training purposes. Thus, nations have invested in simulators, based on experiments and research in military training areas, and have confirmed their effectiveness and efficiency. The Link Trainer, created in 1929, was the first true flight simulator and assisted the development of digital flight simulation and diversified combat systems. During the Gulf War in 1991, US armored forces used simulators to design training material and war games for desert battlefields. Simulation training not only reduces training costs, but also lowers casualty rates and leads to higher success rates. Training simulators have several advantages, such as reduced casualty rates and training expenses, the ability to operate independently of weather, and lower equipment loss; they also allow soldiers to learn self-control, increase their willingness to learn, fully understand battlefield conditions, and improve training efficiency. Simulators have been applied in many areas in recent years, such as the aerospace [1], industrial [2], medical [3], military [4–7], and nuclear power industries [8] and pilot training [9–11]. Although training simulators are used widely in many military fields, the shortage of defense budgets limits the funds available for training with and maintaining simulators. Thus, decreasing training costs and maximizing benefit with limited resources have become critical issues for decision-makers. Moreover, some aspects, such as the time of simulator installation, simulator function, simulator lifecycle, and purpose of training, must be considered with qualitative and quantitative data.

The analytic hierarchy process (AHP) was proposed by Saaty in 1980 as an analytical method that considers qualitative and quantitative data [12, 13] for determining the hierarchy, structure, and quantification of a problem. AHP deconstructs a problem from higher to lower layers using the hierarchy relationships within complicated problems, applying a series of simple comparisons to address complex decision-related problems, and many studies have demonstrated that it is a powerful multicriteria decision-making tool for complex problems [14]. Based on these characteristics, AHP has been applied to obtain solutions for complicated problems in simulator research, for which many studies have been published. For example, Bosch-Mauchand et al. [15] used an AHP value chain simulator to help experts make decisions with regard to industrial product design and industrial process design. Ding et al. [16] applied AHP and load-shedding scheme methods to determine the critical factors of power loading using simulator experiments with a destroyer power system. Lu et al. [17] developed a collision avoidance decision model with the AHP method that could design a driving simulator containing a variety of scenarios to truly reflect real-life transportation environments.

With the rising awareness of human rights, safety is the top priority of training methods, and using simulators can reduce unexpected casualties during training. However, in the past, the evaluation of simulation training systems has focused on performance and cost, failing to consider training safety in the evaluation. Such a pattern can lead to assessment results that not only are not objective but also do not conform to actual situations. To address these problems, importance-performance analysis (IPA) was introduced by Martilla and James in 1977 as an analytical method to measure the importance and performance of problem attributes [18] using 2-dimensional graphs to represent the relationship between importance and performance. Chou and Ding [19] developed a performance appraisal method that combined multiple criteria decision-making (MCDM) and IPA for ocean transportation environments, providing recommendations to improve port service quality through the evaluation of service quality and port location choices. Lin [20] proposed a method that combined a balanced scorecard and IPA and applied it to review the public service of the local Taiwanese government. Chen [21] proposed a more comprehensive performance-evaluation matrix that integrated IPA and an IPA matrix that prioritized quality improvement actions in Taiwanese high-tech industries. Ho et al. [22] used IPA, multiple regression analysis, and the decision-making trial and evaluation laboratory (DEMATEL) technique to evaluate a supplier’s performance in the industrial computer industry.

Because evaluations of simulators usually lose important information and cause biased conclusions, this paper adopted a 2-tuple linguistic representation model to aggregate semantic information. The 2-tuple linguistic representation model was proposed by Herrera and Martinez in 2000 [23] and is based on the concept of symbolic translation into computing with words without losing information. The 2-tuple linguistic model has been used successfully in a wide range of applications, such as evaluation and review of fuzzy linguistic programs [24], web quality evaluations [25], sensory evaluations [26], risk assessment of the color super twisted nematic [27], and engineering evaluation processes [28]. To resolve the problems of simulator evaluations, this study proposed a novel benefit evaluation model using an AHP-based ranking technique and applied the IPA method to plot values on a 2-dimensional graph to help managers understand the actual benefits for each simulator. Further, to avoid losing valuable information, this paper also applied the 2-tuple fuzzy linguistic representation method to aggregate the questionnaire results to provide suggestions with regard to resource allocation, based on the evaluation results of training simulators. Thus, training benefits can be enhanced, and fewer training resources can be wasted.

The remainder of this paper is organized as follows. In Section 2, the literature is reviewed briefly. A novel approach that integrates the AHP, IPA, and 2-tuple fuzzy linguistic representation methods is proposed in Section 3. A numerical example of the ranking of simulator benefits is adopted, and comparisons with other approaches are discussed in Section 4. The final section presents our conclusions.

2. Related Works

2.1. AHP Method

The AHP method was developed by Professor Saaty at the University of Pittsburgh in 1980 [12]. Usually, the complexity of a problem is decided by the interaction of many factors, and decision-makers must understand the significant criteria when they face such problems. They need to assess the relative importance of these factors to solve the problems. The AHP method applies an organization hierarchy structure to decompose and prioritize the influencing factors from high to low and top to bottom and determines the relative importance of factors as a single value, based on subjective judgments. Finally, it can decide which critical factors have greater influence by numerical analysis. Thus, the AHP method can help make decisions effectively and simplify decision-making [29].

In the AHP method, the decision-maker must initially know the problem clearly and determine all of its impacting criteria to establish hierarchy relationships between the factors. The data collection is implemented using questionnaires with pair comparison. The pair comparisons are used to build a pairwise comparison matrix to calculate eigenvalues and eigenvectors to determine the weight of each factor. The operation proceeds as follows.

(1) Definition problems and decision goals: when decision-makers use the AHP method to solve complex problems, the problems must be understood clearly. All impacting factors must be listed, categorized, and analyzed.

(2) Build the hierarchy structure of the problem and calculate weights: this stage defines the goal of solving the problem and each criterion that can achieve this goal. These criteria can be prioritized by the Delphi method to identify the relatively important factors, which are categorized and divided by hierarchy and feature. The weight of each hierarchical segment is calculated as follows.

(a) Build a pairwise comparison matrix for each hierarchy class: the data are collected by questionnaires after building a hierarchy structure of impacting factors. The decision-makers obtain the relative weights by pairwise comparison of the relative importance between criteria, and the eigenvector is calculated from these pair comparisons, which, according to Saaty [12], are represented by 9 scales, as in Table 1. The 9 scales can be sorted as equal, moderate, strong, very strong, and extreme importance; the remaining 4 scales are intermediate values between these 5 adjacent scales. If there are n impacting factors in the problem, n(n − 1)/2 pairwise comparisons are needed. In the matrix, each element a_ij is positive, and the corresponding element is its reciprocal:

a_ji = 1/a_ij,  a_ii = 1.    (1)

(b) Calculate the eigenvalues and eigenvectors: after the pairwise comparison matrix is completed, its largest eigenvalue λmax and the associated eigenvector, which gives the factor weights, are calculated.

(c) Test consistency: subjective judgment is involved in the evaluation, so the results must pass a consistency test to ensure that the judgments are consistent; otherwise, the questionnaire is viewed as invalid. Thus, Saaty [12] suggested using the consistency index (CI),

CI = (λmax − n)/(n − 1),    (2)

and the consistency ratio (CR),

CR = CI/RI,    (3)

to determine whether the pairwise comparison matrix is consistent. If CR < 0.1, consistency is achieved [13]. The random index (RI) in (3) depends on the value of n (the rank of the matrix), as seen in Table 2.

Table 1: The 9 scales of pairwise comparison [30].
Table 2: Random index table [12].

(3) Calculate the weight of each level for the overall hierarchy.

(4) After calculating each factor’s weight at each level, determine and prioritize the overall hierarchical weighting by influence to further map out the optimal solution.
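The weighting and consistency test in steps (a)–(c) can be sketched in a few lines of Python. The 3 × 3 pairwise comparison matrix below is illustrative only, not data from this study:

```python
# Sketch of AHP weighting and the consistency test (Eqs. (1)-(3)).
import numpy as np

# Saaty's random index (RI) for matrix sizes n = 1..10 (Table 2).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(A):
    """Return (weights, CR) for a positive reciprocal matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue lambda_max
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # normalized priority weights
    ci = (lam_max - n) / (n - 1)           # consistency index, Eq. (2)
    cr = ci / RI[n] if RI[n] > 0 else 0.0  # consistency ratio, Eq. (3)
    return w, cr

# Example: criterion 1 moderately more important than 2 (scale 3) and
# strongly more important than 3 (scale 5); a_ji = 1/a_ij per Eq. (1).
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))  # CR < 0.1 means acceptable consistency
```

For this matrix the CR is well below 0.1, so the judgments would pass Saaty's consistency test.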

2.2. IPA Method

The IPA method was introduced by Martilla and James in 1977, using graphs to reflect the advantages and weaknesses of service quality [18]; it was first applied in the car sales industry to evaluate 14 service attributes. They used performance as the x-axis and importance as the y-axis and plotted each attribute's average importance and performance scores on the 2-dimensional graph. The strengths and weaknesses of service quality could then be understood to develop an appropriate marketing strategy.

The 4 quadrants of the IPA method identify the importance and satisfaction of each attribute of a product, so enterprises can determine the advantages and disadvantages of their products and improve poorly performing areas to enhance their strength and performance. This matrix assigns a prioritization of attributes for improvement, as in Figure 1, and the definitions of the 4 quadrants are as follows.

Figure 1: The original importance-performance analysis framework [18].

(1) Quadrant I (Keep up the Good Work). A product in this quadrant means that it is important for the customer and performs well; it could be a benefit to the business to keep this competitive advantage.

(2) Quadrant II (Concentrate Here). A product in this quadrant is important for customers, but its performance does not meet their expectations. Therefore, allocating resources to improve product performance in this quadrant takes priority; this would be a key success factor for enterprises' future development.

(3) Quadrant III (Low Priority). A product in this quadrant is considered unimportant by customers and has dissatisfactory performance. Therefore, its production could be ignored or merely maintained until more resources are acquired. A product in this area might be improved after all products in Quadrant II are improved.

(4) Quadrant IV (Possible Overkill). A product in this quadrant means that customers pay less attention but that its performance is beyond expectations. To optimize the distribution of resources, it would be better to direct effort and budget to items that are more important for the enterprise rather than invest in products in this quadrant.
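The quadrant assignment described above can be sketched as a small classifier. The crossing point is taken here as the mean performance and mean importance over all items (one common convention), and the item scores are illustrative:

```python
# Minimal sketch of IPA quadrant classification.

def ipa_quadrant(performance, importance, x_cross, y_cross):
    """Classify one item: performance on the x-axis, importance on the y-axis."""
    if performance >= x_cross and importance >= y_cross:
        return "I: Keep up the good work"
    if performance < x_cross and importance >= y_cross:
        return "II: Concentrate here"
    if performance < x_cross and importance < y_cross:
        return "III: Low priority"
    return "IV: Possible overkill"

# item -> (performance, importance), illustrative scores on a 1-9 scale
items = {"A": (2.1, 6.5), "B": (6.0, 6.8), "C": (5.5, 2.2), "D": (1.8, 2.0)}
xs = [p for p, _ in items.values()]
ys = [i for _, i in items.values()]
x_cross, y_cross = sum(xs) / len(xs), sum(ys) / len(ys)  # crossing point

for name, (p, i) in items.items():
    print(name, ipa_quadrant(p, i, x_cross, y_cross))
```

Item B lands in Quadrant I, A in Quadrant II, D in Quadrant III, and C in Quadrant IV under this crossing point.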

2.3. 2-Tuple Fuzzy Linguistic Representation Model

Humans communicate by delivering information through semantic narration. However, the semantic information that is received could be subjective, uncertain, and fuzzy according to an individual’s preference. “Linguistic variable,” proposed by Zadeh in 1975 [31], is a variable that is based on a word or phrase in natural language. For example, a questionnaire usually uses relative words, such as very good, good, common, bad, and very bad, to evaluate the level of importance of a criterion in surveys.

The 2-tuple fuzzy linguistic representation model was proposed by Herrera and Martinez in 2000 [23]. This semantic approach uses a 2-tuple (s_i, α) to express semantic information. s_i, a symbol of the fuzzy semantic method, is used to represent the value of an item in a questionnaire (i.e., s0: absolutely weak, s1: very weak, s2: fairly weak, s3: slightly weak, s4: common, s5: slightly strong, s6: fairly strong, s7: very strong, and s8: absolutely strong). α represents the deviation of a value, reflecting the gap between the actual value and the fuzzy semantic term, and i indexes the linguistic term. Figure 2 shows the difference between the 2-tuple fuzzy linguistic representation model and a plain semantic variable. The symbolic translation α lies between −0.5 and 0.5, and an expression can be represented as (s_i, α). Definitions 1 and 2, as stated in [23, 32], are restated here for completeness.

Figure 2: Example of 2-tuple fuzzy semantic representation.

Definition 1 (see [23, 32]). Let S = {s0, s1, ..., sg} be a set of linguistic terms, and let β ∈ [0, g] be a value obtained by a symbolic aggregation operation; then, the 2-tuple linguistic representation of β is obtained as follows:

Δ(β) = (s_i, α),  i = round(β),  α = β − i,  α ∈ [−0.5, 0.5),

where s_i is the linguistic term whose index is closest to β, and α is the deviation between the actual value and the fuzzy linguistic term.

Definition 2 (see [23, 32]). Let {(r_1, α_1), (r_2, α_2), ..., (r_n, α_n)} be a set of 2-tuples; their arithmetic mean x̄ is computed as follows:

x̄ = Δ((1/n) Σ_{i=1}^{n} Δ⁻¹(r_i, α_i)),

where Δ⁻¹(r_i, α_i) returns the numerical value of the 2-tuple.
If (s_k, α_1) and (s_l, α_2) are two 2-tuple linguistic expressions, the comparison of linguistic levels is described as follows: if k < l, then (s_k, α_1) is smaller than (s_l, α_2); if k = l, there are three possible situations: (1) if α_1 = α_2, the two expressions represent the same linguistic meaning; (2) if α_1 < α_2, then (s_k, α_1) is smaller than (s_l, α_2); (3) if α_1 > α_2, then (s_k, α_1) is greater than (s_l, α_2).
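The translation Δ, its inverse, and the arithmetic mean of Definition 2 can be sketched for the nine-term set s0..s8 used in this paper. The example ratings are illustrative:

```python
# Sketch of the 2-tuple operations in Definitions 1 and 2.
g = 8  # index of the highest linguistic term, s8

def to_2tuple(beta):
    """Delta: map beta in [0, g] to (i, alpha) with alpha in [-0.5, 0.5)."""
    i = min(int(beta + 0.5), g)   # index of the nearest term s_i
    return i, beta - i            # alpha is the symbolic translation

def from_2tuple(i, alpha):
    """Inverse Delta: recover the numerical value beta = i + alpha."""
    return i + alpha

def mean_2tuple(tuples):
    """Arithmetic mean of 2-tuples (Definition 2), returned as a 2-tuple."""
    beta = sum(from_2tuple(i, a) for i, a in tuples) / len(tuples)
    return to_2tuple(beta)

# Four expert ratings 5, 5, 5, 4 aggregate to (s5, -0.25): "slightly
# strong" minus a quarter step, with no information lost to rounding.
print(mean_2tuple([(5, 0.0), (5, 0.0), (5, 0.0), (4, 0.0)]))
```

The aggregated value (s5, −0.25) keeps the exact mean 4.75, which is the information-preservation property the paper relies on.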

3. Proposed AHP-Based Ranking Technique

3.1. The Reason for Using the AHP and IPA

Today, training simulators are widely used, but their use has not been evaluated extensively. The main reasons for this lack of data include the difficulty of collecting data from simulators, because the time of installation, function, purpose of training, and operator quality are hard to control. These factors cause incomplete data collection and situations that mix qualitative and quantitative data, leading managers to evaluate the benefits of simulation training systems inaccurately. AHP is an analysis method that considers qualitative and quantitative data and can deconstruct problems through hierarchy, structure, and quantification, from high to low layers, using the hierarchy relationships within complicated problems. By using expert questionnaires, values are assigned by subjective judgment with regard to the importance of each variable, and the weights are calculated to prioritize strategic portfolios for managers, leading to better decision-making.

In the past, researchers always paid attention to using achievement or decreasing training costs in evaluating simulation training systems and did not consider safety, merely ranking the sum of assessment values to analyze simulator performance. It is difficult for the managers to understand the priority and meanings simply within. IPA is an analytical method that measures the importance and performance of problem attributes. This study focused on the training safety and simulator efficiency scores for simulators, graphing them in 2-dimension, 4-quadrant plots for managers to realize the correlating and important priorities, based on the relative points. Further, this can help managers understand and analyze decision-making information easily to determine the improvements that need priority, develop a method to improve the overall benefit of a simulator, and rearrange the best portfolio with a limited budget and resources. Therefore, managers can invest in the most profitable simulators, maintain availability, avoid budget waste, enhance utility efficiency, and further strengthen the performance of training simulators.

3.2. The Procedure of the Proposed Approach

The procedure of the proposed AHP-based ranking technique is organized into the following 7 steps.

Step 1 (decide on the evaluation indicators and complete the questionnaire design). Evaluation indicators are designed to assess the benefits of training simulators by professionals and engineers with many years of professional experience. They completed the questionnaire according to the evaluation indicators. There are two parts to the questionnaires: training security and simulator efficiency.

Step 2 (questionnaire). This step is conducted using a professional questionnaire; the 7 evaluation indicators from Step 1 are evaluated and compared using a 1-to-9 rating system, based on the training simulators.

Step 3 (calculate evaluation benefit values). According to the original data collected in Step 2, this step applies the 2-tuple fuzzy linguistic representation model for semantic conversion. The 7 indicators of all types of simulators are assessed according to importance, and an algebraic average is computed to obtain an average value of importance.

Step 4 (calculate the weight of assessment indicators). Using the evaluation indicators, the paired comparison results from Step 2 are used to generate a pairwise comparison matrix, and the weights of the evaluation indicators are computed per (1), (2), and (3); the outcome includes the largest eigenvalue λmax. Lastly, a consistency test of the pairwise comparison matrix is conducted using the Expert Choice software.

Step 5 (prioritize overall evaluation benefits). The average benefit evaluation values of the simulators from Step 3 and the benefit evaluation weights from Step 4 are multiplied, and the sum is calculated. Then, the sum of the average weights of “training safety” and “simulator efficiency” for the simulators is obtained. Finally, prioritize the benefits of the simulators according to the sum of the average weight.

Step 6 (analysis of benefits of the simulators). After the sum of the average weights of the simulators is prioritized, an analysis of key investments is conducted by IPA, using "training safety" as the y-axis and "simulator efficiency" as the x-axis. Scores of the simulators are placed on the coordinates of a 2-dimension, 4-quadrant plot to analyze key investments. Scores in the first quadrant are the most important investments and should be maintained. Those in the second quadrant need resources to increase training efficiency. Those in the third quadrant should be reviewed to decide whether they can be replaced under a limited budget. Those in the fourth quadrant can be moved to investments of greater benefit (e.g., money can be transferred to investments in the second quadrant) to increase the benefits of the overall investments.

Step 7 (provide suggestions for decision-making). After Steps 5 and 6, the 2-dimension, 4-quadrant plot and the priorities of simulator benefit evaluation are completed; decision-makers can make decisions with the results above when budgets are limited. This can be used for future reference and to increase the efficiency of budget allocations.
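Steps 3 to 6 can be condensed into a short end-to-end sketch: aggregate the expert scores per indicator, weight them, split the weighted sum into the two dimensions, and read off the IPA coordinates. All indicator names, weights, and ratings below are illustrative, not the study's data:

```python
# Compact sketch of Steps 3-6 of the proposed procedure.

# indicator -> (dimension, AHP weight); values are illustrative.
indicators = {
    "reduce total casualties":       ("safety",     0.37),
    "increase training safety":      ("safety",     0.29),
    "weapon involvement in victory": ("efficiency", 0.15),
    "necessity to maintain":         ("efficiency", 0.06),
}

def evaluate(scores):
    """scores: indicator -> list of expert ratings (1..9).
    Returns (safety score, efficiency score, total benefit)."""
    dims = {"safety": 0.0, "efficiency": 0.0}
    for name, (dim, w) in indicators.items():
        mean = sum(scores[name]) / len(scores[name])  # Step 3: aggregate
        dims[dim] += w * mean                         # Steps 4-5: weight and sum
    return dims["safety"], dims["efficiency"], sum(dims.values())

scores = {
    "reduce total casualties":       [7, 8, 7, 8],
    "increase training safety":      [6, 7, 7, 6],
    "weapon involvement in victory": [5, 5, 5, 4],
    "necessity to maintain":         [4, 4, 5, 5],
}
safety, efficiency, total = evaluate(scores)
# Step 6: plot (efficiency, safety) on the IPA axes to classify the simulator.
print(round(safety, 3), round(efficiency, 3), round(total, 3))
```

The total is used to rank simulators in Step 5, while the (efficiency, safety) pair gives the point plotted in Step 6.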

4. Case Study: Training Simulator

4.1. Overview

High-technology weapons are increasingly sophisticated and costly, which has caused many countries to adopt simulators and simulation systems for training in recent years. The advantages of using simulators include avoiding the impact of weather, environmental constraints, and equipment damage; simulating battle situations and environments; reducing training expenses; and increasing the effectiveness and quality of training. Recently, due to the rising awareness of human rights, safety is the top priority of training, and using simulators can effectively reduce unexpected casualties during training.

In this paper, simulator data were provided by the units in Taiwan that develop and use the simulators, and this study evaluated 10 simulators. Experts gave each simulator a value from 1 to 9. After expert interviews, 7 indicators were selected to evaluate the benefit: “involvement of weapon in the victory or defeat,” “increase staff familiarity with the equipment,” “reduce training cost,” “improve return on investment level,” “necessity to maintain,” “increase training safety,” and “reduce total casualties.” These evaluation indicators were placed into 2 dimensions, “simulator efficiency” and “training safety,” as compiled in Table 3.

Table 3: Simulator benefit evaluation indicators.

The questionnaire was designed using the Delphi method to interview 4 relevant experts who had many years of practical simulator experience. This expert group developed the criteria for the impact of the evaluation and the hierarchy structure, completed the design, and implemented the survey questionnaire. According to the questionnaire, the results demonstrate the weight of each item with regard to simulator efficiency and training security, based on their practical experience and perspectives. This study applied the aggregated value using the 2-tuple linguistic representation model, comparing each factor at each level from the experts. The comparison matrix of impacting factors is shown in Table 4, and the questionnaire results for each simulator are shown in Table 5.

Table 4: Impacting factor comparison matrix.
Table 5: Expert evaluation of simulator benefit.

4.2. Solution Based on the Traditional AHP Method
4.2.1. Weighting Calculation

To calculate the weight of each benefit evaluation index, the AHP method was used to analyze the weights of the impacting criteria, based on the pairwise comparisons of the factors in Table 4. The average weights are listed in Table 6. In order to examine the consistency of this questionnaire, this study used the AHP consistency ratio to test the matrix. Consistency is obtained when the CR value is < 0.1.

Table 6: Pairwise comparison matrix of impacting factors.

This paper used Expert Choice 2000 to calculate the weight of each impacting criterion and the CR value. According to the calculations, the CR value for this matrix was 0.05, which means that the questionnaire results are consistent. After calculation of the weights, this study found that the indicator “reduce total casualties” was the most important factor in the “training security” dimension. The weight was 0.369, and that of the second most important factor, “increase training safety,” was 0.297. In the “simulator efficiency” dimension, “involvement of weapon in the victory or defeat” was the most important factor, and its weight was 0.146. The second highest priority factor was “necessity to maintain,” with a weight of 0.063. Other weights from high to low were “increase staff familiarity with the equipment” (0.053), “improve return on investment level” (0.038), and “reduce training cost” (0.034).

4.2.2. Simulator Benefit Analysis

According to the results in Table 5, the average value of each evaluation dimension was calculated and multiplied by the impacting criteria weights from Section 4.2.1. Then, the AHP method was used to obtain the weighted evaluation impacting factors of the simulators. For example, the evaluation values of “involvement of weapon in the victory or defeat” for simulator A were 5, 5, 5, and 4, respectively. After computing the arithmetic mean (4.75) and rounding it to the nearest scale value, the rated value was 5; multiplied by the weight of 0.146 for “involvement of weapon in the victory or defeat,” the calculated value was 0.73. The other simulators were computed in the same way as simulator A, and the weighted averages of the impacting factors of the simulators are shown in Table 7.

Table 7: The weighted average of impacting factors of the simulators.
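The rounding in the traditional calculation above is exactly the information loss the 2-tuple model avoids. A quick check, using the weight 0.146 and the simulator A ratings from the worked example:

```python
# What rounding the mean costs in the traditional AHP calculation.
ratings = [5, 5, 5, 4]   # expert ratings for one indicator of simulator A
weight = 0.146           # weight of the indicator (Table 6)

exact_mean = sum(ratings) / len(ratings)  # 4.75, kept by the 2-tuple model
rounded_mean = round(exact_mean)          # 5, used by the traditional method

print(round(rounded_mean * weight, 2))  # 0.73, the value in Table 7
print(round(exact_mean * weight, 4))    # 0.6935, the information-preserving value
```

The roughly 5% gap between 0.73 and 0.6935 comes entirely from rounding the expert mean before weighting.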
4.3. Solution Based on the Proposed Method

In this section, the proposed method integrates AHP, IPA, and the 2-tuple fuzzy linguistic representation model to evaluate the benefit of 10 simulators, based on Steps 1 to 7 in Section 3, as follows.

Step 1-2 (decide on the evaluation indicators, and complete questionnaire design and questionnaire). This paper used the Delphi method to assess the benefits of training simulators by engineers with over 5 years of experience to define the evaluation index and complete the questionnaire design and questionnaire. The survey results are shown in Tables 4 and 5.

Step 3 (calculate evaluation benefit values). According to the survey results of Tables 4 and 5, this study adopted the 2-tuple fuzzy linguistic representation model, using (4) to execute semantic conversion of the questionnaire results from Step 2 and compute the arithmetic mean of the impacting criteria pairwise comparison and simulator benefit evaluation.

Step 4 (calculate the weight of assessment indicators). The weight values were calculated based on the impacting criteria pairwise comparison matrix, after calculating the weight and consistency ratio of each impacting factor using Expert Choice 2000. The CR value was 0.06, which means that the result of this questionnaire was consistent. In the “training security” dimension, the most important factor was “reduce total casualties,” and its weight was 0.375. The weight of the second most important factor, “increase training safety,” was 0.290. In the “simulator efficiency” dimension, the most important factor was “involvement of weapon in the victory or defeat,” and its weight was 0.154. “Necessity to maintain” was the second highest priority factor; its weight was 0.061. Other weights, from high to low, were “increase staff familiarity with the equipment” (0.049), “improve return on investment level” (0.036), and “reduce training cost” (0.035).

Step 5 (prioritize overall evaluation benefits). In order to better understand the priority of each simulator, this study took the arithmetic means from the simulator benefit evaluation in Step 3 and multiplied them by the impacting criteria weights from Step 4; then, the weighted average of the simulator impacting criteria was calculated. For example, the arithmetic means of “involvement of weapon in the victory or defeat” for simulator A were , , , and , respectively. After computing the arithmetic mean per (5), the rated value was ; furthermore, multiplying by the weight 0.154 of the “involvement of weapon in the victory or defeat” factor, the calculated value was . The other calculations were the same as for simulator A, and the weighted averages of the simulator impacting criteria are shown in Table 8.

Table 8: The weighted average of simulator impacting criteria.

Step 6 (analysis of benefits of the simulators). This step summarized the 5 indicators of “simulator efficiency” and the 2 indicators of “training security” and set “simulator efficiency” on the x-axis and “training security” on the y-axis. The mean value of the distribution was (2.06, 3.94). Finally, a 2-dimensional graph was drawn (Figure 3). The 4 quadrants are described as follows.

Figure 3: Training security and simulator efficiency analysis in the IPA model.

(1) Quadrant I (Keep up the Good Work). There are 4 simulators (B, C, D, and H) in this area. These simulators can improve personnel operation safety and decrease casualties. Further, regarding simulator efficiency dimensions, these simulators can significantly lower training costs. Investment in these simulators is beneficial, and top priority should be maintained.

(2) Quadrant II (Concentrate Here). This study places simulator A in this area, indicating that simulator A can improve personnel safety and reduce the possibility of accidental casualties. Compared with the high-performance area, its scores for equipped-weapon performance and training cost are not significant in the simulator efficiency dimension, but it could greatly improve staff safety and decrease total casualties in the training security dimension. Therefore, this area needs resources to increase training efficiency, which would also enhance overall training performance.

(3) Quadrant III (Low Priority). There are 4 simulators (E, F, G, and I) in this area; the weapons these simulators replace are relatively safe to operate, so the simulators add little in the training security dimension. In the efficiency aspect, respondents saw no relevance or urgency in reducing training costs or maintaining these simulators continuously. Therefore, all simulators in this area showed little gain in the safety and efficiency dimensions. Managers should reconsider distributing resources on a limited budget basis to better refine the investment efficiency of the simulators and further economize the overall training cost and budget.

(4) Quadrant IV (Possible Overkill). Only simulator J lies in this area, meaning that its performance was rated positively by respondents. However, the training that simulator J replaces is not dangerous, so it cannot show significant performance in the training security dimension. This study suggests that managers redirect the budget toward other simulators (e.g., money can be transferred to investments in the second quadrant) that have more utility and are safer, benefiting the entire investment efficiency.

Step 7 (provide suggestions for decision-making). Through Steps 5 and 6, this study obtained the priorities of the simulator benefit evaluation (Table 8) and the 2-dimension, 4-quadrant plot (Figure 3). Through the analysis of this information, managers or decision-makers can use these suggestions to make decisions on simulator investments when budgets are limited, increasing the efficiency of budget allocations.

4.4. Comparisons and Discussion

This study proposed an integrated AHP, IPA, and 2-tuple fuzzy linguistic representation method to evaluate the capability and efficiency of 10 current training simulators. To evaluate the effectiveness of the proposed method, a numerical verification of a case study was performed in Section 4.3, which compared the experimental results with those of the traditional AHP method. The input data are shown in Tables 4 and 5, and the differences between the traditional AHP method and the proposed method can be clearly seen in Table 9.

Table 9: Comparison of AHP and proposed methods.

According to the comparison of research methods (Table 9), there are several advantages of the proposed method.

(1) Adding Security Considerations to the Evaluation. Today, countries primarily use simulators in place of live military exercises to avoid unnecessary casualties during training operations, emphasizing the importance of training safety and human life. However, the traditional method considers only the efficiency of the simulator and is therefore not suitable for practical situations. The proposed method considers not only the end-use efficiency of the simulator itself but also treats “training security” as a primary evaluation criterion, bringing the results closer to the goal and to the actual needs of managers and users of simulator training.

(2) Providing an Effective Resource Allocation Strategy for Managers. This study produces the same importance ranking as AHP but additionally classifies and analyzes each simulator along the “training security” and “simulator efficiency” dimensions using the IPA method. Simulators H, C, B, and D should maintain their budgets rather than increase them, to avoid wasted investment. The budgets of simulators E, G, F, and I can be reviewed and, if funds are insufficient, retrenched. The budget of simulator J can then be transferred to simulator A. Therefore, if resources are sufficient, they can be invested in all simulators pragmatically; when the budget is insufficient, resources can be distributed according to the results of this study.

In summary, the proposed method correctly analyzes the features of each region and the characteristics of each simulator, guiding the manager on simulator use and follow-up investment. It simplifies the manager's decisions and clarifies how available resources should be distributed, and how the overall investment benefit of the simulators should be determined, when the budget is insufficient.

(3) Full Use of the Available Information. The traditional AHP method collects answers on the parameters needed to solve problems in the operation process, but this subtle information easily vanishes or is ignored when the arithmetic mean is calculated. In Table 5, for the “involvement of weapon in the victory or defeat” item, the average value of simulators E, F, and G is the same (5), so the decision maker does not have complete information. This might cause mistakes in subsequent calculations.

The study solves this problem by using the 2-tuple fuzzy linguistic representation model: for the “involvement of weapon in the victory or defeat” item, the 2-tuple values of simulators E, F, and G share the same linguistic label but differ in their symbolic translation values. Therefore, the proposed method provides complete information, preventing incorrect analysis due to information lost in transferring the questionnaire results and providing correct and appropriate information to the manager.
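The mechanism that preserves this information can be sketched with the core translation functions of the 2-tuple model (Herrera and Martinez, 2000): a crisp aggregated value β is mapped to the pair (s_i, α), where s_i is the closest linguistic label and α ∈ [−0.5, 0.5) is the symbolic translation that retains the precision lost by rounding. The label set and sample values below are illustrative assumptions, not the questionnaire's actual scale or data:

```python
# A minimal sketch of the 2-tuple fuzzy linguistic representation model.
# The 7-label scale below is an illustrative assumption, not the
# questionnaire's actual scale.

LABELS = ["s0", "s1", "s2", "s3", "s4", "s5", "s6"]

def to_two_tuple(beta):
    """Delta: translate a crisp value beta into its 2-tuple (label, alpha)."""
    i = round(beta)              # index of the closest linguistic label
    return LABELS[i], beta - i   # alpha keeps the otherwise-lost precision

def from_two_tuple(label, alpha):
    """Delta^-1: recover the crisp value from a 2-tuple."""
    return LABELS.index(label) + alpha

# Three hypothetical aggregated scores that all round to label s5 yet
# remain distinguishable through their symbolic translations:
for beta in (4.8, 5.0, 5.2):
    print(to_two_tuple(beta))
```

Because Δ is invertible, no information is discarded: the crisp value can always be recovered exactly from the pair, unlike a plain rounded average.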

(4) Reducing the Repetition Rate of the Benefit Ranking. Based on the results in Table 9, the values of simulators E and G were both 4.65 under the traditional AHP method, so the sorting repetition rate is 20%. In this study, the sorting repetition rate is 0; the proposed method thus eliminates the 20% repetition rate in the benefit ranking.
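The tie-breaking above can be sketched as follows. The 2-tuple scores used here are hypothetical placeholders chosen only to show the principle (the study's actual values appear in Table 9): two simulators with the same label index remain distinguishable because 2-tuples compare lexicographically, first by label index and then by symbolic translation.

```python
# Hypothetical 2-tuple benefit scores (label_index, alpha): both would
# collapse to the same crisp value under plain arithmetic averaging.
scores = {
    "E": (5, -0.35),  # slightly below label s5 -- placeholder value
    "G": (5, 0.35),   # slightly above label s5 -- placeholder value
}

# Python tuple comparison is lexicographic, which is exactly the
# 2-tuple ordering: compare label indices, then alphas.
ranking = sorted(scores, key=lambda k: scores[k], reverse=True)
print(ranking)
```

Under these placeholder values the tie is broken and a strict ordering results, which is why the proposed method's repetition rate drops to 0.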

5. Conclusions

With the constant development of modern technology and the rising expense of advanced weapon systems and equipment, simulator training in place of exercises with real soldiers is becoming the major trend in most countries. The efficiency of simulators, and whether they are worth the investment, are therefore ongoing concerns in training budget and resource allocation. However, past evaluations of training simulator performance usually focused on reducing training costs rather than on human life and training security, so their results as a whole have not been entirely objective. To overcome this bias, this paper provided a method that integrates AHP, IPA, and the 2-tuple fuzzy linguistic representation model to assess simulator benefits. The proposed method has 4 advantages: it (1) adds security considerations to the evaluation, (2) provides an effective resource allocation strategy, (3) makes full use of the available information, and (4) reduces the repetition rate of the benefit ranking. By verifying the solution results in Section 4.3, this study has shown that the method precisely calculates and analyzes the benefit of training simulators.

These advantages make it possible to determine the priority of simulators based on training needs and to suggest the correct policy for managers to invest in the more profitable simulators within budget constraints. The method can also reduce redundant investments, improve the efficiency of simulators, and secure the long-term development of training capacity. Furthermore, it can accurately predict the priority of simulators and provide correct information for managers, allowing simulators to maximize the return on investment under a limited budget while reducing training casualties, upgrading the quality of training and improving the overall efficiency of simulators.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The authors would like to thank the Ministry of Science and Technology of the Republic of China for financially supporting this research under Contracts NSC 102-2623-E-145-001-D and MOST 103-2410-H-145-002.

