Mathematical Problems in Engineering
Volume 2015 (2015), Article ID 851303, 13 pages
Research Article

QFD Based Benchmarking Logic Using TOPSIS and Suitability Index

1Department of Architectural Engineering, Dankook University, 126 Jukjeon-dong, Yongin-si, Gyeonggi-do 448-701, Republic of Korea
2Department of Architecture, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 446-701, Republic of Korea

Received 10 April 2015; Accepted 26 July 2015

Academic Editor: Mohamed Marzouk

Copyright © 2015 Jaeho Cho et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Users’ satisfaction with quality is a key factor in the successful completion of a project, in relation to decision-making issues in building design solutions. This study proposes QFD (quality function deployment) based benchmarking logic of market products for building envelope solutions. The benchmarking logic is composed of QFD-TOPSIS and QFD-SI. The QFD-TOPSIS assessment model can evaluate users’ preferences for building envelope solutions distributed in the market and allows quick acquisition of knowledge. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) provides performance improvement criteria that help define users’ target performance criteria. The SI (Suitability Index) allows analysis of the suitability of a building envelope solution based on users’ required performance criteria. In Stage 1 of the case study, QFD-TOPSIS was used to benchmark the performance criteria of market envelope products. In Stage 2, a QFD-SI assessment was performed after setting user performance targets. The results of this study confirm the feasibility of QFD based benchmarking in the field of Building Envelope Performance Assessment (BEPA).

1. Introduction

(1) Theory of QFD and Benchmarking. Various ways of utilizing QFD have been continuously studied in the construction industry. QFD is one of Quality Management’s techniques for dealing with customer needs and expectations more systematically, in order to achieve the most important objective of a construction company: the satisfaction of clients [1, 2]. Today, QFD has been widely adopted not only by the manufacturing industry, but also among various other disciplines [3].

Many QFD tools have been developed as decision-making aids that compare the performance of products with the experience and knowledge gained from the current project [4]. The application of benchmarking in QFD is one of its many uses and can be applied at the final stage. A QFD performed after the design stage can be used as a tool to compare competitors and to gain knowledge of end users’ expectations for use in forthcoming projects [2].

Benchmarking has been defined as a systematic approach of measuring one’s performance against that of recognized leaders with the purpose of determining best practices for continuous improvement [5]. It is used to measure performance using a specific indicator resulting in a metric of performance that is then compared to others [6]. Modern industries are increasingly integrating benchmarking in businesses with their strategic planning initiatives in order to gain a competitive edge in the global market, maintain their market shares, and acquire world-class standards and recognition [7, 8].

A benchmarking process should contain at least three steps: collect a reasonably large database, obtain performance information, and conduct comparison analysis [9]. Performance assessment of the market product and storage of such assessment information is a basic procedure for preparation of benchmarking based on QFD.

(2) State of the BEPE (Building Envelope Performance Evaluation). There is a global trend in understanding and evaluating the overall performance of buildings. In several countries (Japan, United States, Canada, EU countries, etc.) such programs have been or are being developed in an attempt to assess the issues that influence the performance of the building [10]. The goal of BPE (Building Performance Evaluation) is to improve the quality of decisions made at every phase of the building life cycle, that is, from strategic planning to programming, design, and construction, all the way to facility management [11].

The performance assessment, including environmental elements and sustainability, should be reflected in the field of BPE as well. Building envelopes, as the interface between interior space and exterior environment, serve the function of weather and pollution exclusion and thermal and sound insulation [12]. A building envelope has multiple performance items, and the designer and the user should make design decisions by considering the priority of these items. For this reason, the building envelope product is a very appropriate subject for QFD.

The fundamental performance required for building envelopes is defined by the following criteria: (1) thermal performance, (2) moisture protection, (3) visual, (4) sound, (5) safety, (6) maintenance access, (7) health and indoor air quality, (8) durability and service life expectancy, (9) maintainability and repairability, and (10) sustainability [13].

High-performance sustainable facades can be defined as exterior enclosures that use the least possible amount of energy to maintain a comfortable interior environment, which promotes the health and productivity of the building’s occupants [14]. Although there is a wide selection of designs to choose from, the performance of their multiple attributes should be taken into account. This requires a decision-making process in which project stakeholders compromise on requirements [15].

Lack of communication and integration has been recognized as a crucial problem during the design stage. Poor communication and integration render the achievement of an optimal design difficult, as well as a time-consuming process [16, 17]. This problem tends to lead to unclear instructions, additional work, progress delay, project delay, and poor quality of design solutions [18, 19].

(3) Current QFD Barriers and Proposed Approach. Currently implemented QFD is recognized as an excellent tool that allows comprehensive assessment of quality in consideration of multiattribute performance and gathers project stakeholder requirements [2]. However, there are still limitations for users in assessing performance with QFD, reutilizing the results of such assessments, and analyzing users’ quality satisfaction [20]. In terms of benchmarking, recommendations of suitable solutions for projects depend on the empirical judgement of designers and professional engineers. Mohsini pointed out the problem of ignoring the interdependency relationships of individual evaluation systems in performance assessment [21].

The market’s subjective assessment of novel technology and unfamiliar usability excludes new chances to improve performance [22]. Moreover, performance criteria expressed as user satisfaction have not been proposed. Stakeholders’ satisfaction is a key factor in project success, and a project cannot be deemed successful until it is completed [23]. Since satisfaction is a subjective measurement, it is rarely used in the performance measurement of stakeholders [24].

The abovementioned problems also apply to building envelope solutions. In the design stage, a benchmarking environment covering the best, better, good, and standard technologies in the market has not been established for users who take multiattribute performance of building envelopes into consideration. Also, user satisfaction in current projects has been evaluated in a subjective manner.

To address this, firstly, this study normalizes and standardizes the quality information on envelopes in the market based on QFD. Such quality assessment information allows designers and users to facilitate the benchmarking process when surveying envelope design solutions in the market. Performance assessment of market products must precede everything else in order to establish the benchmarking environment. Then, users can set target performance criteria through benchmarking of the market solutions.

Secondly, this study analyzes the Suitability Index (SI), which presents the level of user satisfaction calculated by comparing products against the required performance criteria (RPC). The Suitability Index can recommend a better solution to users by confirming the satisfaction level.

2. Research Object, Scope, and Procedures

The purpose of this study is to propose QFD based benchmarking logic for building envelope solutions. QFD-TOPSIS performs the performance assessment and the performance benchmarking of market products by defining users’ function priorities; QFD-SI analyzes the quality suitability based on users’ requirements.

The range of this study includes an investigation of the validity and applicability of the benchmarking logic through a simple case study. The establishment of QFD knowledge management systems for Building Envelope Performance Assessment is a subject for future study. The study was conducted by the following procedure.
(1) Set the approach and purpose of the study.
(2) Investigate studies related to decision-making methods and benchmarking applications in QFD.
(3) Propose the fundamental theory of QFD.
(4) Establish QFD based benchmarking logic.
(5) Propose the QFD-TOPSIS model, in which TOPSIS is combined with QFD.
(6) Propose the QFD-SI model, in which SI is applied to QFD in order to analyze users’ quality suitability.
(7) Validate the benchmarking logic through case analysis.
(8) Draw the conclusions of this study.

3. Literature Review

QFD has been recognized as a useful tool for technical realization of clients’ subjective requirements and measurement of clients’ satisfaction [2, 25]. Today, QFD has been widely adopted not only by the manufacturing industry, but also among various other disciplines [3]. The extent of areas where QFD has been researched has become so exhaustive that Carnevalli and Miguel investigated the research done in QFD as a research topic itself [20].

Harding et al. suggested an information model connecting market products and users’ requirements by utilizing QFD [26]. That study became the conceptual model of the QFD-based benchmarking logic. Singhaputtangkul et al. proposed a QFD assessment system for a building envelope [22]. Li et al. suggested a QFD system combining fuzzy-TOPSIS, which is used in multicriteria decision-making [27].

Furthermore, QFD has been studied in combination with various decision-making models. The following studies are examples: the decision-making model combining QFD and fuzzy theory [28–32], the decision-making model of fuzzy based QFD combined with ANP [33], the decision-making model combining QFD and ANP [33, 34], in addition to the Kano model [35], DEA model [36], Rough Set model [37], SMART (Simple Multiattribute Rating Technique) [38], conjoint analysis [39], MAV [40], and FAHP (Fuzzy Analytic Hierarchy Process) model [32], which were all utilized in QFD as quality assessment methods.

Such assessment methods have focused on relative comparisons among products and are thereby difficult to utilize in market benchmarking. In addition, evaluation methods involving complex calculation processes are a fundamental problem that lowers the utilization of QFD.

Meanwhile, benchmarking studies using QFD have been conducted. The benchmarking application of QFD is a systems engineering approach for continuously evaluating and measuring current operations (system, process, product, or service) and comparing them to “best-in-class” operations [7]. In particular, with respect to a rapidly changing market, the incorporation of new product development risk, competitors’ benchmarking information, and feedback information into the network model may be considered a novel contribution in the QFD literature [34].

The benchmarking approach with QFD was studied in the automobile industry [41]. Benchmarking with QFD requires information collection, information analysis, and updating with recent products. Hence, QFD based knowledge management systems and decision-making methods are expected to be closely related in QFD based benchmarking.

Despite the continuous studies to date on QFD, literature reviews have noted that there are difficulties in the utilization of quality planning and benchmarking by QFD [20]. It was found that the major methodological difficulties are related to the stage of elaborating quality matrixes (almost 80% of the mentions), such as “interpreting the customer’s voice” [41], “identifying the most important customer demands” [42], and “project decision-making, since correlations among the demands are not clear” [43]. It was noted that reducing the methodological difficulties in developing the quality matrix is a key factor in encouraging and expanding the use of QFD [20]. The details of those difficulties in the utilization of QFD are described in [20, 41, 42].

In particular, there is an issue of obtaining different results from quality assessments depending on the kind of decision-making model applied to the HOQ. Therefore, the following demands need to be considered in order to improve QFD usability in benchmarking. First, the quality assessment in QFD should exclude subjectivity and be obtained objectively. Second, the decision-making model for QFD should be as simple as possible, considering users’ usability. Third, comparison of performance according to users’ function priorities should be practicable, since market products carry multiattribute performance. Fourth, the quality suitability should be confirmed based on users’ RPC. Sections 5, 6, and 7 of this study propose QFD based benchmarking logic and formula models in consideration of the above four improvement demands.

4. QFD Theory

Quality function deployment (QFD) is a technique that deals with customer needs and expectations in a more systematic manner in order to achieve the most important objective of a construction company: the satisfaction of clients [2]. The core of the QFD method is a matrix commonly referred to as the “house of quality” (HOQ), a 2D matrix that displays customer needs, also referred to as the WHATs, and the organization’s technical responses to these needs, also referred to as the HOWs.

Each of the customer needs for the WHAT can be cross-checked against the related design and product response elements of the HOW. The core matrix of QFD or HOQ is illustrated in Figure 1 [44].

Figure 1: QFD matrix chart.

A flow chart depicting the steps involved in QFD is provided in Figure 2 [3].
(1) WHATs: the primary input to the HOQ is a prioritized list of basic customer demands (requirements and needs). Each demand is documented as a WHAT and prioritized, as represented in Figure 2.
(2) HOWs: HOWs are the design (or technical or product) characteristics that serve to meet the WHATs. For each WHAT, a corresponding HOW is identified, as represented in Figure 2.
(3) Relationship matrix: it indicates how product characteristics or decisions affect the satisfaction of each customer need. It consists of the relationships existing between each WHAT and each HOW attribute.
(4) Absolute weights and ranking of HOWs: it contains the results of prioritizing product characteristics to satisfy customer requirements. It represents the impact of each HOW attribute on the WHATs and is the final step before ranking the weights for decision-making, as shown in Figure 2.
(5) Correlation matrix: it is the roof of the HOQ and represents the interdependencies among the HOWs, as shown in Figure 2.

Figure 2: QFD-HOQ concept model.
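The HOQ elements listed above can be sketched as plain data structures. In the sketch below, the demands, characteristics, importance scores, and the 0–9 relationship scale are illustrative assumptions, not values taken from the paper.

```python
# WHATs: prioritized customer demands (hypothetical, on a 1-10 scale)
whats = {"thermal comfort": 10, "daylight": 7, "low noise": 5}

# HOWs: technical characteristics that answer the WHATs
hows = ["U-value", "visible transmittance", "sound insulation"]

# Relationship matrix: strength of each WHAT-HOW link (0 = none, 9 = strong)
relationship = {
    ("thermal comfort", "U-value"): 9,
    ("daylight", "visible transmittance"): 9,
    ("low noise", "sound insulation"): 9,
}

# Absolute weight of each HOW: importance-weighted sum of its relationships
weights = {
    how: sum(imp * relationship.get((what, how), 0)
             for what, imp in whats.items())
    for how in hows
}
# Ranking of HOWs, the final step before decision-making
ranking = sorted(hows, key=lambda h: weights[h], reverse=True)
```

With these illustrative inputs, "U-value" receives the highest absolute weight and tops the ranking, mirroring how the HOQ turns prioritized WHATs into a ranked list of HOWs.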

5. Benchmarking Logic Based QFD

QFD benchmarking logic generates the state of performance distribution of market products in order to improve the performance of the current solution. The performance criteria of products are determined by the technical characteristics (“HOWs”) in the HOQ matrix. Designers and users can identify possibilities for performance improvement through comparison of the multiattribute performance of the HOQ HOWs.

The subjects of the quality comparison are products or the latest solutions currently used in the market. Benchmarking new technology provides a good opportunity for performance improvement. An important advantage of the benchmarking process is that it allows users to confirm the level of performance improvement, beyond simply copying performance. Figure 3 presents the benchmarking logic for users’ quality planning in this study.

Figure 3: Benchmarking logic based on QFD and TOPSIS.

The benchmarking process is performed by the following procedure.
(1) The requirements of designers and users (WHATs) are defined and the priority of the technical characteristics (HOWs) is set.
(2) A product solution that is currently considered standard quality is compared with products in the market by using the QFD-TOPSIS model.
(3) Designers and users set a performance target by benchmarking a product in the market that performs better than the current solution. This becomes the required performance criteria (RPC).
(4) The SI of the market products is calculated from the similarity between the RPC and the product actual performance (PAP).
(5) Users select the most suitable solution for the current project: the solution whose total SI is the nearest to 0, with lower SI-NC and higher SI-IC.


6. QFD-TOPSIS Model

The overall customer satisfaction level is derived from multiple customer attributes that generally conflict with one another. Therefore, a multiattribute value (MAV) function is very well suited for the mathematical formulations in QFD [45].

TOPSIS is a multicriteria decision-making method used to analyze preferences among alternatives; it is usually employed for relative comparison of alternatives [46, 47]. It calculates scores based on the distance between a positive ideal solution (PIS) and a negative ideal solution (NIS). Hwang and Yoon [48] originally proposed the TOPSIS method in order to identify solutions from a finite set of alternatives. The detailed traditional TOPSIS solution can be found in Chan and Wu [29].

TOPSIS analyzes product preference in consideration of the multiple properties of a product, service, system, and so forth. Here, the preference refers to scores obtained from the quality evaluation. The scores are distributed between 0 and 1, and the quality improves as the preference score approaches 1.

In this study, the value is normalized to an absolute value, while original TOPSIS models use relative values. Product quality can be defined as a grade, and the value is in turn normalized to between 0 and 1 by dividing by the total distance. Figure 4 presents the concept of normalization for product performances possessing different measurement scales.

Figure 4: TOPSIS evaluation model in multicriteria decision-making.
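The absolute normalization described above (dividing by the total distance between the scale’s ideal values) can be sketched as follows; the attribute examples and value ranges are illustrative assumptions.

```python
def normalize(x, niv, piv):
    """Map a raw score onto [0, 1] by dividing by the total distance
    between the Negative Ideal Value and the Positive Ideal Value.
    Cost attributes (lower is better, e.g. a U-value) work unchanged
    if their PIV is passed as the smaller number."""
    return (x - niv) / (piv - niv)

# Benefit attribute: condensation resistance on its 1-100 scale
cr = normalize(60, niv=1, piv=100)     # ≈ 0.596
# Cost attribute: a hypothetical U-value range of 2.0 (worst) to 0.8 (best)
u = normalize(1.2, niv=2.0, piv=0.8)   # ≈ 0.667
```

Because the ideal values come from the standardized scale rather than from the sampled products, scores for different products stay comparable when new products are added, which is what makes the model usable for market benchmarking.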

“WHATs” and “HOWs” have a 1 : 1 relationship in the QFD-TOPSIS model. When one “WHAT” has two or more “HOWs,” the 1 : 1 relationship is kept by duplicating the “WHAT.” When one “HOW” has two or more “WHATs,” those “WHATs” are summarized into one “WHAT.” One reason for keeping independent 1 : 1 relationships between “WHAT” and “HOW” is that the QFD-TOPSIS evaluation model only evaluates product performance.

Unlike the function evaluation methods used in existing QFD studies, performance evaluation is one of the best approaches for securing measurement objectivity. Furthermore, a 1 : N or N : 1 relationship between “WHAT” and “HOW” carries the risk of intervention by subjective evaluation, the most challenging issue in QFD utilization.

In the practical environment, one requirement (WHAT) can have one or more technical characteristics (HOWs); conversely, one technical characteristic (HOW) can serve one or more requirements (WHATs). The first step in securing objectivity is to remove the subjective matrix relationship between WHATs and HOWs and to recognize each technical characteristic as a unique entity.

A technical characteristic must have a PIV and an NIV. This is an objective fact in terms of engineering. Therefore, the technical characteristic can be treated as a unique entity in relation to the functional requirements. A technical characteristic is matched to one functional requirement; however, that functional requirement is allowed to be defined with multiple subfunctions.

As a result, each technical characteristic has a distinct functional requirement, and the PIV and the NIV are defined based on the corresponding functional requirement. All technical characteristics present the objective facts of PIV and NIV based on the 1 : 1 relationship between WHATs and HOWs. The assessment results will always be consistent, regardless of the replacement of the field expert, since they are based on objective facts.

The next step following the 1 : 1 relationship is to define the priority of the functional requirements and their technical characteristics. Ranking the priority of the technical characteristics is the domain of the user’s subjective judgement.

The procedure of QFD-TOPSIS is expressed in the following steps.

Step 1 (generate technical measurement criteria (HOWs)). This step defines the performance criteria HOW_j (j = 1, …, n) for the evaluation of product quality. The performance criteria are the technical characteristics for the realization of users’ requirements.

Step 2 (identify customer requirements (WHATs)). This step defines the users’ functional requirements.
The requirements (WHATs) are realized by the technical “HOWs” of the products.
The definition of a “WHAT” may go through repeated revision in order to keep a 1 : 1 relationship between “WHATs” and “HOWs.”

Step 3 (define the importance rating of customer requirements). Users and designers rate the priority of each “WHAT” on a scale of 1 to 10 points.
In turn, the “WHATs” are sorted in order of priority.

Step 4 (determine initial technical performance ratings of HOWs). The actual measurement of the product is determined by using a standardized scale that is currently used in the market. The type of performance measurement is one of a distance scale, an ordinal scale (9, 7, 5, and 3 points), a ratio scale, or a binary value (true/false). The initial technical performance rating is x_ij (product ID: i = 1, …, m; technical attribute: j = 1, …, n). In turn, the Positive Ideal Value (PIV_j) and the Negative Ideal Value (NIV_j) are defined as the best and worst values attainable on the standardized scale of attribute j, so that NIV_j ≤ x_ij ≤ PIV_j for every product i (with the bounds reversed for attributes where a lower raw value is better, such as the U-value).

Step 5 (weight the technical characteristics (HOWs)). The technical characteristics are compared pairwise, referring to the priority of the “WHATs” determined in Step 3. Each pair comparison is scored by relative advantage (i.e., 1 point is added when the priorities are the same; 2 points are added when the priority is higher by 1 point; 3 points are added when the priority is higher by 2 points). The weight w_j is the accumulated pair-comparison score s_j normalized by the total:

w_j = s_j / (s_1 + s_2 + ⋯ + s_n).
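One plausible reading of the Step 5 pair-comparison rule can be sketched as follows. The paper’s equation is not reproduced in the text, so extending the quoted rule beyond a priority gap of 2 (a gap of d earning d + 1 points) is an assumption, as are the example priorities.

```python
def pairwise_weights(priority):
    """priority: dict mapping each HOW to its user priority (Step 3).
    Each HOW accumulates points against every other HOW: 1 point when
    the priorities are equal, 2 when its priority is higher by 1,
    3 when higher by 2, and so on (the generalization to larger gaps
    is an assumption). The weight is each HOW's share of all points."""
    score = {h: 0 for h in priority}
    for a in priority:
        for b in priority:
            if a == b:
                continue
            diff = priority[a] - priority[b]
            if diff == 0:
                score[a] += 1          # same priority: 1 point each
            elif diff > 0:
                score[a] += diff + 1   # higher by d: d + 1 points
    total = sum(score.values())
    return {h: s / total for h, s in score.items()}

pairwise_weights({"A": 10, "B": 8, "C": 8})
# → {"A": 0.75, "B": 0.125, "C": 0.125}
```

Note how the scheme amplifies small priority differences: "A" is only slightly more important than "B" and "C" on the raw scale, but dominates the normalized weights, which matches the spread seen in the weighting example of Section 8.2.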

Step 6 (determine the relationship between WHATs and HOWs). The actual performance values of the products, x_ij, are matched in the 1 : 1 relationship of “WHATs” and “HOWs.”

Step 7 (competitive rating of HOWs). The performance evaluation of the market products is recorded. The decision-making matrix for the jth technical characteristic (HOW) of the ith product is

X = [x_ij]  (i = 1, …, m, where m is the number of products; j = 1, …, n, where n is the number of technical characteristics).

Each x_ij is normalized as a unit vector in which the maximum value is 1, and the result is multiplied by the weight w_j. The normalized value v_ij is calculated as

r_ij = (x_ij − NIV_j) / (PIV_j − NIV_j),   v_ij = w_j · r_ij.

Step 8 (analyze preference and benchmark). In turn, calculate S_i−, the sum of the distances between the weighted values v_ij and the NIV (Negative Ideal Value, which is 0 after normalization):

S_i− = Σ_j v_ij.

Finally, calculate the closeness to the ideal solution. The closeness C_i of product i is defined as

C_i = S_i− / (S_i+ + S_i−),   where S_i+ = Σ_j (w_j − v_ij).

The closer C_i is to 1, the higher the users’ preference. Users confirm the difference in preference between the current solution and the market products by referring to the preference score. Furthermore, users will benchmark the products with relatively higher preference than the current solution.
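Steps 4–8 can be condensed into a short sketch, assuming the absolute normalization described in this section (the NIV maps to 0 and the PIV to the attribute weight). The function name and example values are illustrative, not from the paper.

```python
def preference(raw, niv, piv, w):
    """Closeness score in [0, 1] for each product (Steps 4-8).
    raw: one row of attribute scores per product; niv, piv, w: per-attribute
    ideal values and weights. With absolute normalization the NIV maps to 0
    and the PIV to the attribute weight, so the two TOPSIS distances always
    sum to sum(w) and the closeness reduces to S_minus / sum(w)."""
    scores = []
    for row in raw:
        v = [w[j] * (row[j] - niv[j]) / (piv[j] - niv[j])
             for j in range(len(row))]
        s_minus = sum(v)                 # distance to the NIS (all zeros)
        scores.append(s_minus / sum(w))  # = S_minus / (S_plus + S_minus)
    return scores

# Two illustrative products over two attributes (a CR-like benefit scale
# and a U-value-like cost scale): the first hits every PIV, the second
# every NIV, so their scores bracket the [0, 1] preference range.
preference([[100, 0.8], [1, 2.0]], niv=[1, 2.0], piv=[100, 0.8], w=[0.6, 0.4])
# → [1.0, 0.0]
```

Because the normalization is absolute, a new market product can be scored without recomputing the scores of products already in the database, which is the property the benchmarking logic relies on.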

Step 9 (product performance targeting). Users set feasible targets to improve performance by comparing the current solution with the benchmarking product.


7. QFD-SI Model

QFD-SI recommends the most suitable products to users through a similarity analysis between the RPC and the product actual performance (PAP) in the market. The SI becomes closer to 0 as the similarity between the RPC and the PAP increases. The suitability is composed of the Total Suitability Index (TSI), the Suitability Index based on negative criteria (SI-NC), and the Suitability Index based on ideal criteria (SI-IC). Users can consider a product the most suitable solution when its TSI and SI-NC are close to 0 and its SI-IC is close to 1.

After the QFD-TOPSIS procedure is complete, the process of QFD-SI is expressed in the following steps.

Step 10 (TSI (Total Suitability Index)). Assign a Suitability Index SI_ij by measuring the differential value between the RPC and the PAP for each technical characteristic:

SI_ij = PAP_ij − RPC_j.

The TSI (Total Suitability Index) for product i is estimated by summing these differentials:

TSI_i = Σ_j SI_ij.

Step 11 (SI-NC (Suitability Index based on negative criteria)). SI-NC is the sum of the negative SI values (SI−, i.e., SI_ij < 0) divided by the absolute value of the largest attainable negative total of SI−:

SI-NC_i = ( Σ_{j : SI_ij < 0} SI_ij ) / |SI−_max|.

Step 12 (SI-IC (Suitability Index based on ideal criteria)). SI-IC is the sum of the positive SI values (SI+, i.e., SI_ij > 0) divided by the largest attainable ideal total of SI+:

SI-IC_i = ( Σ_{j : SI_ij > 0} SI_ij ) / SI+_max.

Step 13 (improvement performance ratio). The ratio represents the performance improvement, that is, the difference between the TSI of the current alternative and the TSI of the base alternative. Figure 5 shows the concepts of the QFD-TOPSIS and QFD-SI models in this study.

Figure 5: QFD-TOPSIS, QFD-SI model.
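A literal reading of Steps 10–12 can be sketched as follows. The denominators of SI-NC and SI-IC (the largest attainable negative and positive totals) are assumptions, since the paper’s equations are not reproduced in the text.

```python
def suitability(pap, rpc, w):
    """pap, rpc: weighted product-actual and required performance values
    per attribute; w: the attribute weights, i.e. the upper bound of each
    weighted value after normalization. A literal reading of Steps 10-12;
    the two denominators below are assumptions."""
    si = [p - r for p, r in zip(pap, rpc)]
    tsi = sum(si)                                        # Step 10
    neg_max = sum(rpc)                                   # every value falls to 0
    ideal_max = sum(wi - ri for wi, ri in zip(w, rpc))   # every value hits its PIV
    si_nc = sum(s for s in si if s < 0) / neg_max if neg_max else 0.0
    si_ic = sum(s for s in si if s > 0) / ideal_max if ideal_max else 0.0
    return tsi, si_nc, si_ic
```

For a product that overshoots one requirement and undershoots another by the same weighted amount, the TSI is 0 even though SI-NC and SI-IC are nonzero; this is the offset effect discussed for product ID 1 in Section 8.5, and it is why all three indices are reported together.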

8. Numerical Examples

In the case study, the performance evaluation data of a building envelope are utilized in order to verify the QFD-TOPSIS and QFD-SI models. The target performance criteria (TPC) are set based on 10 sample data; in turn, the quality suitability is analyzed based on the RPC.

8.1. Curtain Wall Performance Evaluation

The evaluation criteria for the performance of a curtain wall in this study are composed of the following 10 items: (1) U-value (glazing): distance scale; (2) U-value (total): distance scale; (3) SHGC (summer): distance scale; (4) SHGC (winter): distance scale; (5) VT: distance scale; (6) CR: distance scale; (7) AT: distance scale; (8) aesthetics: ordinal scale; (9) WS: ordinal scale; and (10) WT: binary scale.

The U-value is a measure of how much heat is transferred through the window. A lower U-value means better thermal insulation performance. The performance of the U-value can be classified into the glass itself and the glass curtain wall including the frame.

Solar Heat Gain Coefficient (SHGC) is a measure of how much solar radiation passes through the window. SHGC is expressed as a number between 0 and 1. The lower a window’s SHGC is, the less solar heat it transmits. In terms of users’ functional requirements, SHGC can be classified into SHGC (summer) and SHGC (winter). The ideal values for SHGC (summer) and SHGC (winter) are 0 and 1, respectively.

Visible transmittance (VT) is the amount of light in the visible portion of the spectrum that passes through a glazing material. A higher VT means there is more daylight in a space, which, if designed properly, can offset electric lighting and its associated cooling loads.

Condensation Resistance (CR) measures how well a product resists the formation of condensation. CR is expressed as a number between 1 and 100. The higher the number is, the better a product is able to resist condensation.

Air-tightness (AT) can be defined as the resistance to inward or outward air leakage through unintentional leakage points or areas in the building envelope. The amount of air leakage must be lower than the acceptable standard, 0.06 cfm/ft², based on the ASTM E283 standard test.

Aesthetics refers to the user’s aesthetic assessment of the envelope. Measurement of aesthetics is flexible in relation to an individual’s subjectivity and culture. In this study, a 5-grade ordinal scale was applied to aesthetic performance (i.e., grade 5: the best).

Wind safety (WS) is the safety of the windows against wind. Safety measurement follows the ASTM E330 standard test. In this study, WS adopts a 3-grade scale (i.e., grade 3: the most excellent), since a domestic standardized scale for WS is not yet available.

Water tightness (WT) is the windows’ resistance against water leakage. The amount of water leakage must be 0 based on the ASTM E331 standard test. In this study, the evaluation of WT is determined as 1 (no leakage) or 0 (leakage) based on the test results.
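The ten criteria of Section 8.1 can be collected into a small lookup table. The optimization directions follow the descriptions above; treating SHGC (summer) as minimized and SHGC (winter) as maximized is the physically natural reading and is noted here as an assumption.

```python
# (name, scale type, direction), where "min" means a lower raw value is better
criteria = [
    ("U-value (glazing)", "distance", "min"),
    ("U-value (total)",   "distance", "min"),
    ("SHGC (summer)",     "distance", "min"),
    ("SHGC (winter)",     "distance", "max"),
    ("VT",                "distance", "max"),
    ("CR",                "distance", "max"),
    ("AT",                "distance", "min"),   # leakage below 0.06 cfm/ft²
    ("Aesthetics",        "ordinal",  "max"),   # 5-grade scale
    ("WS",                "ordinal",  "max"),   # 3-grade scale
    ("WT",                "binary",   "max"),   # 1 = no leakage
]
```

A table like this is what fixes the PIV/NIV orientation of each attribute before normalization: "min" attributes take their scale’s lower bound as the PIV, "max" attributes the upper bound.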

8.2. Users’ Requirement on Curtain Wall

The weights of the technical characteristics are calculated from pairwise comparisons of the performance properties. Each weight is calculated by relative comparison based on users’ priority scores for the “WHATs.” Figure 6 shows the comparison matrix and the weight of each performance item.

Figure 6: Weighting matrix for users performance requirements.

A–J means that A and J have the same importance. B-2 (in the F column) means that B is more important than F by 2. For the performance items (A–J), the sum of each item’s importance is calculated. Then, the importances of the performance items were normalized to a scale of 10. Item A has the highest importance, 10. Each of B, C, D, and E has an importance of 4.2. Each of F, G, H, I, and J has an importance of 1.5.

Pairwise comparison between performance items has the advantage of securing the user’s consistency in setting priorities. However, compensation for distortion of the priority scores is required; this is out of the scope of this study.

8.3. Preference Analysis

Ten window samples for the building envelope were chosen to validate the case study. Table 1 shows a sample of the performance data for the curtain wall products. Product ID 8 was selected as the solution for the current project. Table 2 shows the performance data after normalization (see also Figure 7); the normalized values lie between 0 and 1. Table 3 shows the results of the preference analysis based on TOPSIS. Among the 10 products, product ID 7 has a preference of 0.77, the value nearest to 1 (see also Figure 8). Hence, users can select product ID 7 as the benchmarking solution.

Table 1: Initial performance data for 10 curtain wall products.
Table 2: Normalized data for 10 curtain wall products.
Table 3: Weighted data for users’ preference.
Figure 7: Normalized data for 10 curtain wall products.
Figure 8: Weighted data for users’ preference.
8.4. Benchmarking and Setting Up TPC (RPC)

Product ID 11 is a new TPC set in this study. The TPC was set by comparing the performance of the benchmarking product, ID 7, with that of the current product, ID 8. In this case study, the authors targeted insulation performance for energy saving as the subject of performance improvement; the U-value and AT are key with regard to energy saving. (Users’ requirements and subjects of interest differ by project environment.) Therefore, the U-value and AT of the benchmarking product (ID 7) were set as the TPC (ID 11). In Table 4, the product ID 11 solution presents the RPC that adjusted the performance criteria through users’ benchmarking. In Table 5, the product ID 11 solution presents the RPC with the weights applied.

Table 4: Comparison in initial performance data.
Table 5: Comparison in weighted data.
8.5. Suitability Index

The SI of the 10 products was calculated based on the RPC of product ID 11. Table 6 and Figure 9 present the TSI, SI-NC, and SI-IC of the 10 products. Product ID 1 was identified as the solution closest to 0, with a TSI of 0.00.

Table 6: Suitability Index.
Figure 9: Suitability Index for market product.

The RPC and PAC are considered to match completely when the TSI has a value of 0.00. However, such a result is occasionally obtained because the values of SI-NC and SI-IC offset each other. For product 1, SI-NC is −0.08 and SI-IC is 0.26; once the weights are applied, these values offset each other and yield a final TSI of 0.00.
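The offsetting behavior can be sketched under the assumption that TSI combines a shortfall term (SI-NC, criteria below the RPC) and an excess term (SI-IC, criteria above the RPC), each weighted. The exact formulation and the combination weights `alpha` and `beta` are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def suitability_index(pac, rpc, w, alpha=1.0, beta=1.0):
    """Split the weighted gaps between a product's performance (PAC)
    and the required performance criteria (RPC) into SI-NC (negative
    gaps, i.e., shortfalls) and SI-IC (positive gaps, i.e., excess
    performance), then combine them into TSI with assumed weights."""
    gap = np.asarray(w) * (np.asarray(pac) - np.asarray(rpc))
    si_nc = gap[gap < 0].sum()   # non-conformance: performance below RPC
    si_ic = gap[gap > 0].sum()   # improved conformance: performance above RPC
    tsi = alpha * si_nc + beta * si_ic
    return tsi, si_nc, si_ic
```

Under such a formulation, a TSI of 0.00 can arise either from a perfect match or from shortfalls exactly offset by excesses, which is why SI-NC and SI-IC must be inspected alongside TSI.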

This study demonstrated the feasibility of TSI, SI-NC, and SI-IC in the case study. TSI, SI-IC, and SI-NC enable suitability analysis, automatic product search, optimal decision-making, and requirement-based risk review.

To this end, a computer program implementing the TSI, SI-IC, and SI-NC logic needs to be developed.

8.6. Summary of Case Study

Multiple performance criteria were analyzed using the 10 sample data sets. The performance items were redefined as unique technical characteristics based on the functional requirements. Because each performance item has both a PIS and an NIS, the 10 samples were evaluated against the corresponding performance criteria. In turn, benchmarking based on the performance items was conducted under assumptions about the project's characteristics and the users' requirements.

The TPC for the envelope performance was defined, and the TSI, SI-IC, and SI-NC of the 10 samples were analyzed against the TPC. Product ID 1 was selected as the optimal alternative with respect to the requirements.

Therefore, this study validated the QFD-based benchmarking logic using TOPSIS and SI.

9. Conclusions

New technologies in the market are the best subjects for benchmarking aimed at improving building performance. Designers and users can define the TPC by investigating the performance status of market products. The QFD benchmarking methods proposed in existing studies have seen limited use because of subjective evaluations of users' requirements and functions. In this study, TOPSIS was adopted for multicriteria decision-making in order to improve the industrial utility of QFD and to secure objectivity in product evaluation.

(1) This case study confirmed the feasibility of benchmarking based on QFD-TOPSIS and QFD-SI in the field of BEPA.

(2) The QFD-TOPSIS assessment model evaluates users' preferences and provides performance improvement criteria resulting from the benchmarking of market products.

(3) QFD-SI provides suitability information generated by comparing users' RPC with the PAP of market products.

(4) The QFD-based TOPSIS and SI logic of this study was confirmed to be applicable to the multicriteria decision-making problems that occur across a broad range of engineering industries.

In future work, a web-based QFD knowledge system in which project stakeholders can share performance information will be studied.


Disclosure

The first author is Jaeho Cho.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This research was supported by a grant (15AUDP-C067809-03) from the Architecture & Urban Development Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government. This research was also supported by the National Research Foundation of Korea (NRF) (no. NRF-2012R1A1A2043186).

