Advances in Decision Sciences
Volume 2014, Article ID 576596, 11 pages
http://dx.doi.org/10.1155/2014/576596
Research Article

Designing and Validating a Model for Measuring Innovation Capacity Construct

1Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia
2Department of Marketing, Faculty of Business and Accountancy, University of Malaya, 50603 Kuala Lumpur, Malaysia

Received 21 May 2014; Accepted 17 September 2014; Published 30 September 2014

Academic Editor: Shelton Peiris

Copyright © 2014 Mahmood Doroodian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In today’s rapidly changing and highly competitive business environment, innovation is broadly recognized as a powerful competitive weapon. Innovation is a dynamic process that needs continuous, evolving, and mastered management. Thus, companies need to monitor and measure their innovation capacity in order to manage the innovation process. Yet, the current innovation literature lacks a psychometrically valid scale for the innovation capacity construct. The purpose of this paper is to develop a reliable and valid measurement scale for innovation capacity. To test its unidimensionality, reliability, and several components of validity, we used data collected from 175 small- and medium-sized enterprises (SMEs) in Iran and performed a series of analyses. The reliability measures, exploratory and confirmatory factor analyses, and several validity tests strongly support a four-dimensional scale for measuring innovation capacity. The dimensions are knowledge and technology management, idea management, project development, and commercialization capabilities.

1. Introduction

Rapid changes in technologies and the globalization process have altered the former rules of competition, and innovation has become increasingly essential for companies to remain competitive [1, 2]. Innovation in the form of new or improved products and processes can be an important source of market and cost advantages for a firm; it can also increase demand through product differentiation and enhance the firm’s ability to innovate [3]. Innovation is different from generating an idea or a new method of doing work. Instead, innovation should be regarded as a sustainable and continuous process [4]. Companies have to commit to permanent and concurrent innovation creation, as it is the critical driver in the search for competitive advantage [5]. Continuous innovation creation, in fact, becomes reality for a company through an ongoing and dynamic process of developing and improving new or existing products, processes, technological capabilities, and management practices [4]. Given the dynamic nature of the innovation process, assessment of innovation capacity becomes a key concern in ensuring continuous development of these management efforts [2].

The issue of quantifying and assessing the innovation process and its practices is an important and complex one for many companies [6]. A key challenge is to measure the complex processes that affect an organization’s innovation capacity so that they can be optimally managed [7]. Measurement of innovation capacity is critical for both practitioners and academics, yet the literature is characterized by a diversity of approaches, prescriptions, and practices that can be confusing and contradictory [8]. Empirical studies have found that many organizations tend to focus only on the measurement of innovation inputs and outputs in terms of cost, speed to market, and numbers of new products and disregard the processes in between [7–9]. From a systemic standpoint, a strong limitation exists: the activities linking the inputs to the outputs of the innovation process are not evaluated [2]. Thus, the focus of this study is on measuring innovation based on innovation activities and efforts rather than on the inputs or outputs of the innovation process.

The purpose of this study is to develop a reliable and valid scale for the measurement of innovation capacity based on innovation activities and to assess its psychometric properties. The remainder of this study is organized as follows. In Section 2, the literature related to innovation measurement is briefly reviewed. The methods employed for measurements, sampling, and data collection are explained in Section 3. In Section 4, the results of data analysis by both descriptive and inferential methods are described, while Section 5 provides a discussion on the findings. Finally, Section 6 presents concluding remarks.

2. Innovation Measurement

Substantial research has been conducted in the field of innovation measurement at the country and firm levels [2, 32]. Considering the innovation process as consisting of inputs, activities, and outputs, most studies have evaluated the innovativeness of a firm based on the inputs or outputs of the innovation process [10, 33–36]. However, this approach to innovation measurement has some drawbacks, particularly in the case of small and medium enterprises (SMEs) and companies in developing countries.

The level of research and development (R&D) expenditure has repeatedly been used as an overall measure of the innovativeness of firms [8, 9]. In fact, R&D is an input to the innovation process that does not necessarily lead to innovations [37, 38]. R&D expenditure may overestimate innovativeness because it also counts unsuccessful R&D efforts [9]. Furthermore, not all new products and processes are created in R&D laboratories [39]. Innovations can originate from either a specific problem or a self-discovered idea that is eventually turned into an unexpectedly profitable outcome; in such cases, evaluating innovativeness through R&D expenditure will underestimate the level of innovativeness [9]. Finally, R&D data used as an innovation indicator tends to favour large firms over SMEs, because SMEs’ R&D efforts are often informal and unrecorded [38, 40] or infrequent [39].

One of the intermediate output measures that has repeatedly been used as a global measure of the innovativeness of firms is patent data. However, patents measure inventions rather than innovations [37, 41, 42]. Innovation is the conversion of an invention into a useful and marketable new or improved product or process. Measuring innovation by using patent data therefore risks overestimating the level of innovativeness by counting inventions that have not been transformed into marketable innovations [9]. Furthermore, the tendency to patent varies between industries [39]. For various reasons (e.g., high costs, difficulties in the patenting process, and relatively high imitation costs), some companies prefer to protect their innovations by other methods such as maintaining lead time over rivals, industrial secrecy, and technological complexity [38, 43, 44]. Since not all innovations are patentable, patent data is an imprecise measure of innovation capability [9].

The literature indicates that there are two output-based approaches to measuring a firm’s innovativeness: innovation counts and firm-based surveys. The first can be considered an objective approach, in which information on innovations is collected and counted from various sources such as new product/process announcements, databases, and specialized journals [9]. The second can be regarded as a subjective approach, in which surveys and interviews on innovation are undertaken across firms [9]. However, both methods have limitations; in practice, the innovation count approach tends to favour product over process innovations [37, 38, 45] and radical innovations over incremental ones [42]. Moreover, this method excludes failed innovations and thus precludes any comparative analysis of innovation successes and failures.

The major drawback of firm-based surveys is that response rates play a vital role in the significance and representativeness of the measurement results [46]. Another disadvantage of this method relates to its methodology, which measures the newness or innovativeness of a firm by asking dichotomous questions: whether or not firms have been involved in innovation activities. As noted by Amara et al. [47], responses to this type of question indicate that the proportion of innovative firms has increased constantly and significantly during the last decade, reaching about 80% in some countries. However, research findings based on this approach to innovation measurement are becoming less and less valuable, since empirical studies increasingly deliver additional confirmation of prior results rather than shedding new light on the nature of the innovation process and its related factors [9].

In summary, this research was initiated as a result of the abovementioned gaps in current innovation measurement approaches, which tend to focus only on the measurement of innovation process inputs and/or outputs. This study developed a scale for the innovation capacity construct based on the innovation activities and efforts of a firm. This approach to innovation measurement is consistent with the OECD’s [3] recommendation for developing countries that “measurement exercises should focus on the innovation process rather than its output and emphasis on how capabilities efforts and results are dealt with” (p. 139).

3. Research Methodology

The innovation capacity scale was developed based on Churchill’s [48] guidelines for developing better measures. The procedure includes specifying the domain of the construct, item generation, data collection, item purification, scale development, and scale qualification, as shown in Figure 1.

Figure 1: Research methodology.
3.1. Specifying Domain of the Construct

According to Churchill [48], the first step in developing a scale for measuring a construct is to delineate what should be included in and/or excluded from the definition of the construct. To delineate the domain of the innovation capacity construct, Szeto’s [49] definition of innovation capacity was used: “a continuous improvement of the overall capability of firms to generate innovation.” Innovations may be manifested in different forms, comprising product, process, organizational, and marketing innovations [3]. This study focuses on product and process innovations. Moreover, the degree of novelty can vary, ranging from incremental improvements in existing products and processes to radical changes in the form of entirely new products and processes. This research included both improvements in existing products and processes and the generation of entirely new ones. These four types of innovations are very close to the definition provided for technological product and process (TPP) innovation in the 2nd edition of the Oslo Manual [42], with the exception that the manual excludes minor improvements, which we included in the domain of this study. These are the dominant forms of innovation in low- and medium-technology industries as well as in the SMEs of developing countries. This means that the scale developed in this study excludes organizational and marketing innovations. Based on this definition, innovation capacity is concerned with the continuous improvement of innovation capabilities. The continuity of improvement is assumed to be gained through continuous innovation efforts or engagement with innovation practices in a company. Thus, innovation practices are considered the items or measures for evaluating the innovation capacity of a firm.

3.2. Item Generation

A literature review and an expert survey were used to determine the innovation activities that may potentially improve innovation capabilities, and a list of innovation practices was extracted. The extracted items were examined by a multidisciplinary consultant team and subject matter experts, involving academics and practitioners in innovation as well as the SMEs. The context of measurement, the comprehensiveness and appropriateness of the items, and possible overlapping items were examined. Consequently, some of the preliminary practices were eliminated, a number of new practices were suggested, and a few were combined.

A draft questionnaire was designed, and the respondents were asked to specify the extent to which the practices are institutionalized in their company. A pilot test was performed in 10 industrial SMEs to improve the wording, sequence, appropriateness, and clarity. Finally, 24 items were developed as measures of the innovation capacity construct, referred to as Innovation Practices (IPs).

3.3. Sampling Design

The subcontractor SMEs of Iranian automaker companies were selected as the target population of this research. According to UNIDO [50], industrial SMEs have high potential to grow in Iran, and activating this potential depends on improving the competitiveness capacity of these firms. The automotive industry is the second largest industry in Iran, after oil and gas. It is the only industry in Iran in which the backward linkage between large-scale companies and SMEs is well established [50]. In addition, all active firms in this industry must hold ISO-9000 and ISO-TS certificates as a minimum requirement. These certificates have established basic systems and procedures that facilitate information generation, documentation, and accessibility. Thus, it was assumed that the respondents’ tendency to participate in this survey, as well as the quality of responses, would be higher than in the SMEs of other industries. A randomly selected list of 400 industrial SMEs in the automotive industry was prepared. To improve the response rate, the agreement of managers and/or expert members of the involved companies to participate in this survey was sought at the 7th International Auto Part Exhibition (29 November–2 December 2012).

3.4. Data Collection

A research questionnaire was organized to collect data. The questionnaire comprised 3 groups of questions about (1) the demographic information of respondents and their firms, (2) the innovation practices, and (3) the state of innovation-related competitive performance of the firm. It is recommended that respondents be selected from individuals who are well matched to the context. In this research, the potential respondents were identified as managing directors, R&D managers, engineering managers, and well-informed experts, based on the firm’s decision. All potential respondents were familiar with the topic of this study and were involved in the innovation process of their company. The questionnaire was distributed to the sample firms via an email containing a web-based link to the questionnaire, as used by Forsman [51]. A reminder email was sent to those who did not respond within 2 weeks of the first email. A total of 181 responses were received, of which 6 were discarded because of significant missing data. In total, 175 questionnaires, a 43.75% response rate, were used for data analysis. Nonresponse bias was assessed by comparing the means of the last quartile of responses with those of the other three quartiles, as suggested by Armstrong and Overton [52]. The results of the t-test (sig. = 0.353) verified the absence of significant nonresponse bias in the data.
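As an illustration only, the late-versus-early respondent comparison suggested by Armstrong and Overton can be sketched as follows, assuming the responses are held in a pandas DataFrame; the column names are hypothetical placeholders, not the authors' actual variable names.

```python
# Minimal sketch of the Armstrong-Overton nonresponse-bias check: the last
# quartile of respondents (a proxy for nonrespondents) is compared with the
# earlier three quartiles on a summary score.
import pandas as pd
from scipy import stats

def nonresponse_bias_test(df: pd.DataFrame, order_col: str, score_col: str):
    """Independent-samples t-test between late (last quartile) and early respondents."""
    cutoff = df[order_col].quantile(0.75)
    late = df.loc[df[order_col] > cutoff, score_col]
    early = df.loc[df[order_col] <= cutoff, score_col]
    return stats.ttest_ind(early, late, equal_var=False)

# Example usage with hypothetical columns:
# t_stat, p_value = nonresponse_bias_test(responses, "reply_order", "ip_total")
# A p-value well above 0.05 (here sig. = 0.353) suggests no meaningful bias.
```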

4. Data Analysis

4.1. Descriptive Analysis

Table 1 shows summarized demographic information about the respondents and their firms. Based on their annual income, all the firms derived more than 75% of their business from the auto-part industry and had obtained ISO-9000 and ISO-TS certificates as prerequisites for entry into, and continued operation in, the industry. Based on the OECD [3] definition, small- and medium-sized firms are those with 10–49 and 50–249 employees, respectively. In total, 84% of the firms participating in this study were medium-sized companies and the rest were small-sized. More than 85% of the respondents had 5–10 years of experience; about 50% were well-informed experts, and 50% were managing directors and R&D managers. All respondents had a university education.

Table 1: Statistics of the respondents and firms.
4.2. Purifying the Generated Measures

Item purification aims to examine the extent to which the selected items actually belong to the domain of the innovation capacity construct. According to Churchill [48], if all items in a scale are derived from the domain of a single construct, the responses to those items should be highly intercorrelated. Conversely, low interitem correlations indicate that some items are not drawn from the appropriate domain, thus producing error and unreliability. As suggested by Churchill [48], a simple way to improve interitem correlations is to eliminate items with low item-total correlations. The item-total correlations, calculated using SPSS 18, were near zero for some items, so these items needed to be purified. Through an iterative process, items with correlations near zero, or items that produced a substantial or sudden drop in the item-total correlation, were deleted. The process stopped when a set of items with homogeneous and improved item-total correlations was obtained. At this stage, 5 items were dropped because of their low contribution to the total correlation. Removing these 5 items left 19 items with improved item-total correlations, a minimum value of 0.454, and better homogeneity across the item-total correlations of the remaining items. The final innovation practices (IPs) and a selection of their supporting citations are presented in the appendix.
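A minimal sketch of this iterative purification step is shown below. It assumes the item scores sit in a pandas DataFrame; the 0.3 cutoff is a common rule of thumb used here only for illustration, not the criterion reported by the authors, who dropped items with near-zero correlations or sudden drops.

```python
# Sketch of iterative item purification using corrected item-total correlations
# (each item correlated with the sum of the remaining items).
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

def purify(items: pd.DataFrame, threshold: float = 0.3) -> pd.DataFrame:
    """Iteratively drop the weakest item until all corrected correlations pass the threshold."""
    current = items.copy()
    while True:
        r = corrected_item_total(current)
        if r.min() >= threshold or current.shape[1] <= 2:
            return current
        current = current.drop(columns=[r.idxmin()])

# Example usage with a hypothetical DataFrame of the 24 initial IP items:
# retained = purify(ip_items)   # in this study, 19 items remained (minimum correlation 0.454)
```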

4.3. Exploratory Factor Analysis

Given the large number of retained variables for measuring innovation capacity (19 items), factor analysis was used to understand the structure and dimensionality of the data as well as to summarize and reduce the variables. The factor analysis had a primarily exploratory purpose because of the insufficient empirical evidence in the literature on the dimensions and characteristics of innovation capabilities in industrial SMEs.

To examine the appropriateness of the data for exploratory factor analysis (EFA), sampling adequacy was evaluated with the Kaiser-Meyer-Olkin (KMO) test. The KMO value of 0.888 was greater than the recommended level of 0.50 [53], which indicates that applying the EFA technique can be useful in grouping the IPs into a factor solution. Bartlett’s test of sphericity (approx. Chi-square = 3179.296, df = 171, sig. = 0.000) indicated the significance of the interitem correlations. The significant interitem correlations indicated the possibility of exploring a new factorial structure for the original variables. In addition, examination of the communalities revealed that all the variables were suitable for inclusion in the EFA, as the communality was greater than 0.5 for every variable. The principal component method was used to extract the variables’ underlying factors; this method searches for the total communality that is closest to the total observed variance [54]. The EFA was conducted using eigenvalues greater than one, factor loadings greater than 0.45, and varimax rotation to identify the number of extracted factors. The results of the EFA showed that the 19 items loaded significantly on 4 different factors, which was confirmed by the scree test. F1, F2, F3, and F4 explained 21.821%, 18.094%, 20.464%, and 21.120% of the variance of the 19 original variables, respectively.
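The original analysis was carried out in SPSS; as an illustration only, the same sequence of checks and the varimax-rotated solution can be approximated in Python with the third-party factor_analyzer package. The DataFrame name ip_items and the extraction settings are assumptions for the sketch, not the authors' exact setup.

```python
# Illustrative EFA workflow: KMO and Bartlett checks, then a four-factor
# varimax-rotated solution. method="principal" is assumed to approximate the
# principal-component extraction used in the paper.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(ip_items: pd.DataFrame, n_factors: int = 4):
    chi_square, p_value = calculate_bartlett_sphericity(ip_items)  # should be significant
    _, kmo_model = calculate_kmo(ip_items)                         # should exceed 0.50

    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(ip_items)

    loadings = pd.DataFrame(fa.loadings_, index=ip_items.columns)
    variance, prop_var, cum_var = fa.get_factor_variance()
    return kmo_model, (chi_square, p_value), loadings, cum_var

# kmo, bartlett, loadings, cum_var = run_efa(ip_items)
# In this study KMO = 0.888, Bartlett Chi-square = 3179.296 (sig. = 0.000), and the
# four factors together explained about 81.5% of the variance.
```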

The results show that a total of 81.498% of the variance of the 19 original variables was explained by the 4 extracted factors. Moreover, the minimum factor loading was 0.785, well above the minimum of 0.45 [53]. These results thus lead us to accept the 4 extracted factors as the dimensions of the innovation capacity construct (see Table 2). Finally, based on the interpretation of the items involved in each factor and considering the factor loadings, the following names were deemed suitable for the 4 factors as dimensions of innovation capacity:
F1: knowledge and technology management capability (KTM);
F2: idea management capability (IDM);
F3: project development capability (PDV);
F4: commercialization capability (COM).

Table 2: The varimax rotated matrix.

Based on the abovementioned EFA results, the innovation capacity (INVCAP) construct can be considered a second-order latent factor measured by the four dimensions (Figure 2). Each dimension is, in turn, measured as a first-order latent factor by its related observed indicators.
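In standard second-order factor-analytic notation, the structure in Figure 2 can be written as follows (a generic sketch of such a model, not the authors' exact specification):

```latex
\begin{aligned}
x_{ij} &= \lambda_{ij}\,\eta_j + \delta_{ij}, \qquad j \in \{\mathrm{KTM},\ \mathrm{IDM},\ \mathrm{PDV},\ \mathrm{COM}\},\\
\eta_j &= \gamma_j\,\xi_{\mathrm{INVCAP}} + \zeta_j,
\end{aligned}
```

where x_ij is the ith observed innovation practice of dimension j, eta_j is the first-order latent dimension with loadings lambda_ij, xi_INVCAP is the second-order innovation capacity factor with second-order loadings gamma_j, and delta_ij and zeta_j are the corresponding error terms.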

Figure 2: Innovation capacity as a second-order factor model.
4.4. Confirmatory Factor Analysis

Confirmatory factor analysis (CFA) was used to assess quality-related criteria of the second-order model developed for measuring innovation capacity. CFA involves the evaluation of an a priori measurement model in which the observed variables are mapped onto the latent construct according to theory. We did not have an a priori measurement model, owing to the insufficient empirical evidence in the literature on measures of SMEs’ innovation capacity. However, we used CFA to verify different quality criteria of the developed model, since the measures were selected on the basis of prior conceptual and empirical studies [55]. Moreover, CFA is a powerful technique for assessing the quality of a measurement instrument, as it provides quality criteria that are not provided by EFA (e.g., overall model fit indices and composite reliability). The results of the CFA are presented and discussed in the following sections.
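The original CFA was run in covariance-based SEM software; purely as an illustration of how such a second-order specification can be expressed, the sketch below uses the third-party semopy package with lavaan-style syntax. The item names (ip1 ... ip19) and their assignment to the four dimensions are hypothetical placeholders, not the authors' actual item allocation.

```python
# Illustrative second-order CFA specification, assuming the semopy package.
import semopy

MODEL_DESC = """
KTM =~ ip1 + ip2 + ip3 + ip4 + ip5 + ip6
IDM =~ ip7 + ip8 + ip9 + ip10
PDV =~ ip11 + ip12 + ip13 + ip14 + ip15
COM =~ ip16 + ip17 + ip18 + ip19
INVCAP =~ KTM + IDM + PDV + COM
"""

def fit_second_order_cfa(data):
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                       # maximum-likelihood estimation by default
    estimates = model.inspect()           # loadings, including the second-order betas
    fit_stats = semopy.calc_stats(model)  # chi-square, CFI, GFI, NFI, RMSEA, ...
    return estimates, fit_stats
```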

5. Discussion of Findings

The objective of this research was to develop a model for measuring the innovation capacity construct, which is deemed crucial for the further development of innovation research. The findings revealed that innovation capacity can be considered a four-dimensional construct (INVCAP) comprising the knowledge and technology management, idea management, project development, and commercialization capabilities. Moreover, a second-order factor model was suggested for measuring this four-dimensional model of innovation capacity. In this section, the objective is to systematically evaluate the measurement properties of the second-order measurement model. Achieving this objective requires testing the model in terms of the key components of quality: content validity, reliability, unidimensionality and convergent validity, and discriminant and nomological validity.

5.1. Content Validity

We expected an acceptable level of content validity in the developed measurement scale as a result of using a logical scale development process. The items were initially selected through an extensive review of the related literature, which improved their comprehensiveness and relevance. The items were then screened and validated by subject matter experts, comprising academics and practitioners, within the context of innovation and the SME sector. The experts examined the items in terms of appropriateness, applicability, overlap, and ambiguity. Finally, a draft questionnaire was designed, and a pilot test was performed with 10 managers or experts of industrial SMEs as potential respondents to improve the wording, sequence, appropriateness, and clarity of the final version.

5.2. Reliability

Reliability is defined as the extent to which a set of variables is consistent with what it is intended to measure [53]. To assess the reliability of the measurement model, coefficient alpha [56], item-total correlations [57], indicator reliability [58], and composite reliability [59] were used. Table 3 provides the various reliability estimates of the individual indicators and their respective latent variables (i.e., the INVCAP dimensions).

Table 3: Assessment of reliability of measurement model.

Coefficient alpha and composite reliability represent the internal consistency of the set of items for each dimension. As presented in Table 3, values of coefficient alphas range from 0.924 to 0.940, exceeding the commonly accepted level of 0.7 [57]. The results thus demonstrate that, for each INVCAP dimension, the various indicators that constitute the scale are strongly correlated with one another. In addition, values of composite reliability estimates range from 0.918 to 0.939, exceeding the threshold level of 0.7 [60]. That is, for each latent variable, the trait variance accounts for more than 70% of the overall measure variance.

Table 3 also shows the item-total correlation and indicator reliability for each of the indicators. Values of indicator reliability range from 0.541 to 0.915 and all are greater than the threshold level of 0.5 [60]. Similarly, all item-total correlations are greater than 0.5, indicating that each indicator correlates highly with the rest of the indicators purported to measure the same latent variable [57]. Taken together, the results in Table 3 provide empirical evidence for the reliability of the measures.
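As an illustration, the internal-consistency estimates of the kind reported in Table 3 can be computed from raw item scores and standardized loadings along the following lines; this is a minimal sketch, and the example loadings are hypothetical placeholders rather than values from the study.

```python
# Sketch of the reliability estimates: Cronbach's alpha from raw item scores,
# and composite reliability and indicator reliability from standardized loadings.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # Werts-Linn-Joreskog composite reliability, assuming error variance = 1 - loading^2.
    errors = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def indicator_reliability(loading: float) -> float:
    # Squared standardized loading of a single indicator.
    return loading ** 2

# Example with hypothetical standardized loadings for one dimension:
# lam = np.array([0.88, 0.90, 0.85, 0.87, 0.91])
# composite_reliability(lam), indicator_reliability(lam[0])
```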

5.3. Unidimensionality and Convergent Validity

Unidimensionality refers to the existence of a single trait underlying a set of measures; it is achieved when the measuring items have acceptable factor loadings on the respective latent construct. Convergent validity is consistency in measurement across operationalizations; it is achieved when all item loadings in a measurement model are statistically significant. Unidimensionality and convergent validity can be assessed simultaneously with a confirmatory factor analysis model [61, 62].

In this study, the unidimensionality and convergent validity of the four first-order models were first examined based on the results of the exploratory factor analysis (EFA). The EFA results (Table 2) demonstrated that all 19 original variables loaded on the 4 dimensions and that all the loadings had high values, between 0.785 and 0.944, much higher than the recommended value of 0.45 [53]. Additionally, all the loadings were significant at the 0.001 level. In addition, confirmatory factor analysis (CFA) was used to assess the unidimensionality, convergent validity, and overall model fit of the second-order model depicted in Figure 2. The results of the CFA are summarized in Table 4.

Table 4: Assessment of unidimensionality and convergent validity of the second-order model.

When judged by the Chi-square statistic alone, the four-dimensional second-order model does not provide a good fit to the data. However, the normed Chi-square is within an acceptable range (Chi-square/df < 2) [58], and NFI (0.921) and CFI (0.963) are greater than 0.9, while GFI (0.864) falls slightly short of the 0.9 threshold.

As can be seen in Table 4, the β value is 0.366 for one of the dimensions (KTM), which indicates a relatively weak correlation between KTM and the INVCAP construct. However, the other β values are greater than the 0.45 threshold, and all the second-order factor loadings (β) are statistically significant based on their associated t-values. In addition, the standardized residual matrix contains no residuals over 2.58, the threshold level suggested by Jöreskog and Sörbom [63]. Taken together, the key statistics provide consistent support for the unidimensionality and convergent validity of the four-dimensional second-order model for measuring the innovation capacity construct.

5.4. Discriminant Validity

Discriminant validity refers to the extent to which the measures of a latent variable (i.e., each INVCAP dimension) are unique and thus differ from the measures of other constructs. In this study, three different procedures were adopted to assess discriminant validity: the Chi-square difference test, the confidence interval test, and the average variance extracted test.

5.4.1. Chi-Square Difference Test

In the Chi-square difference test [64, 65], discriminant validity is assessed through a series of pairwise tests comparing an unconstrained model (model 1) with a constrained model (model 2). The objective of these tests is to examine whether the correlation between any two dimensions is significantly different from unity. In model 1, the covariance between the two dimensions is unconstrained; that is, the dimensions are allowed to covary. In model 2, the covariance between the two dimensions is constrained to one. A significantly lower Chi-square statistic for model 1, compared with model 2, provides support for discriminant validity.

Table 5 summarizes the results of the six pairwise comparisons of the four INVCAP dimensions. Estimates of the between-dimension correlations (φ) range from 0.197 to 0.567, providing preliminary evidence that the dimensions are not perfectly correlated with each other. As can be seen from Table 5, the values of the Chi-square difference, each with one degree of freedom, range from 29.366 to 47.386. Given that the critical Chi-square value with one degree of freedom is 10.827 at the 0.001 level, all six Chi-square differences are statistically significant. That is, each unconstrained model provides a significantly better fit than its corresponding constrained model. The Chi-square tests thus support the discriminant validity of the dimensions.
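The decision rule behind this test can be illustrated with a short sketch; the inputs are the Chi-square values of the two competing models, and the critical value is taken from the Chi-square distribution with one degree of freedom.

```python
# Sketch of the pairwise Chi-square difference test: the constrained model fixes
# the correlation between two dimensions to 1, and a significantly higher
# Chi-square than the unconstrained model supports discriminant validity.
from scipy.stats import chi2

def chi_square_difference_significant(chi2_unconstrained: float,
                                       chi2_constrained: float,
                                       alpha: float = 0.001,
                                       df_diff: int = 1) -> bool:
    delta = chi2_constrained - chi2_unconstrained
    critical = chi2.ppf(1 - alpha, df_diff)   # ~10.83 for df = 1 and alpha = 0.001
    return delta > critical

# The smallest reported difference (29.366) clearly exceeds the critical value:
# 29.366 > chi2.ppf(0.999, 1)   -> True
```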

Table 5: Chi-square difference test for assessment of discriminant validity.
5.4.2. Confidence Interval Test

In addition to the Chi-square difference test, a confidence interval test can be used to assess discriminant validity [64, 65]. This test involves establishing a confidence interval of two standard errors around the correlation between a pair of latent variables of interest. Evidence for discriminant validity is provided when the interval does not include 1.0.

Table 5 presents the correlation estimates (φ) for the six pairs of dimensions and the standard error of each estimate. The confidence interval for the KTM and IDM dimensions, for instance, can be calculated as [φ − 2SE, φ + 2SE] = [0.101, 0.417]. As shown in the table, confidence intervals were computed for each pair of variables. None of the intervals includes the value 1.0, suggesting that it is unlikely that the true correlation between any pair of dimensions is 1.0. Accordingly, the results of the confidence interval tests are also supportive of discriminant validity.
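A minimal sketch of this check is shown below. The φ and SE values in the usage comment are inferred from the reported interval [0.101, 0.417] (midpoint and half-width), so they should be treated as illustrative rather than values quoted by the authors.

```python
# Sketch of the confidence interval test: build an interval of two standard
# errors around each between-dimension correlation and check that it excludes 1.0.
def correlation_confidence_interval(phi: float, se: float):
    return phi - 2 * se, phi + 2 * se

def supports_discriminant_validity(phi: float, se: float) -> bool:
    low, high = correlation_confidence_interval(phi, se)
    return not (low <= 1.0 <= high)

# For KTM-IDM the reported interval is [0.101, 0.417], which implies roughly
# phi ~ 0.259 and SE ~ 0.079; the interval excludes 1.0, so the pair passes.
# supports_discriminant_validity(0.259, 0.079)  -> True
```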

5.4.3. Average Variance Extracted Test

With the average variance extracted test, estimates of the average variance extracted are first computed for two dimensions of interest [66]. The estimates are then compared to the square of the correlation between the two dimensions. If both average variance extracted estimates are greater than the squared correlation, the test provides evidence for discriminant validity.

The average variance extracted estimates are 0.779, 0.799, 0.710, and 0.768 for the knowledge and technology management, idea management, project development, and commercialization dimensions, respectively. These estimates are shown as the on-diagonal elements in Table 6. The squared correlations (φ²) between the four dimensions are summarized as the off-diagonal elements in the table. An examination of Table 6 indicates that, in all six cases, the variance extracted estimates are greater than the squared correlation. As all cases meet the requirement of the variance extracted test, it can be concluded that the results provide evidence for discriminant validity.
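This Fornell-Larcker-style comparison can be sketched as follows; the AVE values are those reported above, while the correlation entries in the usage comment are placeholders drawn from the range reported in Table 5.

```python
# Sketch of the variance extracted test: each pair of dimensions passes when
# both AVE estimates exceed the squared correlation between the dimensions.
from itertools import combinations

def variance_extracted_test(ave: dict, phi: dict) -> dict:
    """ave maps dimension -> AVE; phi maps (dim_a, dim_b) -> correlation estimate."""
    results = {}
    for a, b in combinations(ave, 2):
        phi_sq = phi[(a, b)] ** 2
        results[(a, b)] = ave[a] > phi_sq and ave[b] > phi_sq
    return results

# Example with the reported AVEs and hypothetical pairwise correlations
# (the study reports correlations between 0.197 and 0.567):
# ave = {"KTM": 0.779, "IDM": 0.799, "PDV": 0.710, "COM": 0.768}
# phi = {("KTM", "IDM"): 0.259, ("KTM", "PDV"): 0.40, ("KTM", "COM"): 0.30,
#        ("IDM", "PDV"): 0.45, ("IDM", "COM"): 0.35, ("PDV", "COM"): 0.567}
# variance_extracted_test(ave, phi)   # -> True for every pair
```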

Table 6: Variance extracted tests for assessment of discriminant validity.

Taken together, the three tests performed thus far support the discriminant validity of the four INVCAP dimensions.

5.5. Nomological Validity

To test the nomological validity of the developed scale, it was hypothesized that the innovation capacity of a firm is one of the determinants of its competitive performance, as noted throughout the literature [67–69]. To test this hypothesis, the competitive performance of a firm was measured through product and process competitive performance. Product competitive performance was evaluated through 3 items: maintaining or increasing market share for existing products, gaining market share for newly requested products, and extending the variety of customers. Process competitive performance was assessed through 5 items: reduction of the unit cost of manufacturing, improvement of the quality of existing products, flexibility in production capacity, increased flexibility of the production process, and reduction of the time needed to respond to customer needs [70]. All items were measured on five-point interval scales ranging from 1 to 5, and the respondents were asked to specify the extent to which they were competitive on each performance item in comparison with their main competitors. The score for each type of performance (product and process) was calculated by summing its item scores. Two simple regression models were used to examine the hypothesized positive relation between the innovation capacity construct as the independent variable and product and process performance as the dependent variables. The findings of the statistical analysis are summarized in Table 7. As can be seen from Table 7, both regression models are statistically significant. In both models, the coefficient β of innovation capacity (INVCAP) as the independent variable was positive, with values of 0.569 and 0.577, and statistically significant based on t-values of 9.131 and 8.088. The R² values of the two models indicate that variation in the independent variable accounts for 32.3% and 33.3% of the variation in product and process competitive performance, respectively. These results verify the nomological validity of the developed scale.
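A minimal sketch of the two simple regressions is shown below, assuming the statsmodels package and a DataFrame of summed scores; the column names are illustrative placeholders. Both variables are standardized so that the slope corresponds to the standardized β reported in Table 7.

```python
# Sketch of the nomological validity check: regress each performance score on
# the INVCAP score and inspect the standardized beta, t-value, and R-squared.
import statsmodels.api as sm

def simple_regression(df, x_col: str, y_col: str):
    x = (df[x_col] - df[x_col].mean()) / df[x_col].std()
    y = (df[y_col] - df[y_col].mean()) / df[y_col].std()
    model = sm.OLS(y, sm.add_constant(x)).fit()
    return model.params[x_col], model.tvalues[x_col], model.rsquared

# Hypothetical usage with placeholder column names:
# beta, t, r2 = simple_regression(scores, "INVCAP", "product_performance")
# beta, t, r2 = simple_regression(scores, "INVCAP", "process_performance")
# Reported values: beta = 0.569 and 0.577, t = 9.131 and 8.088, R^2 = 0.323 and 0.333.
```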

Table 7: Assessment of nomological validity.

6. Conclusion

The main contribution of this study is the development of a new measurement scale for the innovation capacity construct, which is deemed important for the further development of innovation research. This contribution is crucial because there is a lack of a psychometrically valid scale for evaluating innovation capacity. The findings suggest a four-dimensional measurement model for evaluating innovation capacity, comprising the knowledge and technology management, idea management, project development, and commercialization capabilities, with high reliability and validity.

The findings are consistent with Christensen and Raynor’s [71] study which shows that successful innovation is driven by two nearly orthogonal dimensions: discovery (ideation) and delivery (implementation). KTM and IDM relate to discovery in general, while PDV and COM relate to delivery. To test the model’s reliability, unidimensionality, and discriminant and convergent validities, we performed a series of analyses using the data collected from 175 subcontractor SMEs in Iran’s automobile industry. The reliability measures as well as unidimensionality and convergent and discriminant validity tests strongly support the proposed scale of measurement. In addition, the nomological validity was verified, suggesting its predictive validity.

Two major implications of this study are as follows. From an academic perspective, the findings indicate that the innovation capacity of a firm can be measured using its innovation activities and efforts instead of innovation inputs or outputs. This approach to innovation measurement captures activities and efforts that have not yet resulted in innovation output. Such activities are the dominant route to incremental product and process innovation in the SMEs of developing countries.

From a practitioner’s perspective, the developed scale can help firms measure and manage innovation as an important competitive weapon. The model suggests a practical way to measure an organization’s innovation capacity. A key managerial aspect of this scale is its focus on the activities that a firm needs in order to become innovative. Besides firm managers, external policy and decision makers can use this scale as a comparative measure. For example, large-scale companies, as the customers of industrial SMEs (e.g., about 2000 subcontractors in Iran’s automobile industry), can use this scale to differentiate and categorize subcontractors for important decisions such as subcontractor selection for new product development (NPD) or assigning market share to a certain subcontractor. In addition, policy makers and developmental organizations may employ the scale to determine their priorities for financial or training support.

Some limitations of this study should be acknowledged, which open up opportunities for future research. First, all the measures, including the measures of nomological validity, are self-assessed and subjective; none are market-based or based on objective financial reports, for instance, because such data are usually not available from SMEs. Second, the scale for measuring innovation capacity was developed and tested within the industrial SMEs of the automobile industry, so further studies should be conducted in other industries such as the electronics industry. Third, single-informant bias could be a concern, as only one person (a managing director, R&D manager, engineering manager, or well-informed expert) completed the questionnaire for each firm. Future research should attempt to address this concern by asking two or more of the abovementioned informants to complete the questionnaire. Finally, this study was conducted in the economic and industrial environment of Iran, which may differ from that of other developed or developing countries. Replication of this research in other countries can help to validate the applicability of its findings in other parts of the world.

In addition, the scale developed in this study opens further research opportunities in the field. One opportunity is to examine the factors that drive or hinder the innovation capacity of industrial SMEs. Sudhir Kumar and Bala Subrahmanya [11] investigated the contribution of large-scale and transnational corporation (TNC) customers to the innovation capability of industrial SMEs in India; other determinants of the innovation capacity of industrial SMEs should be identified and investigated in other countries. A second opportunity is to explore the performance of industrial SMEs resulting from their innovation capacity. A third opportunity is to design such research for large-scale companies, where financial reports and industry data are available; using such data makes it possible to compare innovation capacity between SMEs and large-scale companies. The suggested areas for further study can potentially expand our knowledge of innovation in industrial SMEs.

Appendix

See Table 8.

Table 8

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. M. Pitt, S. Goyal, and M. Sapri, “Innovation in facilities maintenance management,” Building Services Engineering Research and Technology, vol. 27, no. 2, pp. 153–164, 2006.
  2. H. B. Rejeb, L. Morel-Guimarães, V. Boly, and N. G. Assiélou, “Measuring innovation best practices: improvement of an innovation index integrating threshold and synergy effects,” Technovation, vol. 28, no. 12, pp. 838–854, 2008.
  3. OECD, Oslo Manual: Guidance for Collecting Innovation Data, 3rd edition, 2005.
  4. H. Sun, S. Y. Wong, Y. Zhao, and R. Yam, “A systematic model for assessing innovation competence of Hong Kong/China manufacturing companies: a case study,” Journal of Engineering and Technology Management, vol. 29, no. 4, pp. 546–565, 2012.
  5. J. R. Cooper, “A multidimensional approach to the adoption of innovation,” Management Decision, vol. 36, pp. 493–502, 1998.
  6. A. Frenkel, S. Maital, and H. Grupp, “Measuring dynamic technical change: a technometric approach,” International Journal of Technology Management, vol. 20, no. 3, pp. 429–441, 2000.
  7. R. Cordero, “The measurement of innovation performance in the firm: an overview,” Research Policy, vol. 19, no. 2, pp. 185–192, 1990.
  8. R. Adams, J. Bessant, and R. Phelps, “Innovation management measurement: a review,” International Journal of Management Reviews, vol. 8, no. 1, pp. 21–47, 2006.
  9. N. Becheikh, R. Landry, and N. Amara, “Lessons from innovation empirical studies in the manufacturing sector: a systematic review of the literature from 1993–2003,” Technovation, vol. 26, no. 5-6, pp. 644–664, 2006.
  10. H. Romijn and M. Albaladejo, “Determinants of innovation capability in small electronics and software firms in southeast England,” Research Policy, vol. 31, no. 7, pp. 1053–1067, 2002.
  11. R. Sudhir Kumar and M. H. Bala Subrahmanya, “Influence of subcontracting on innovation and economic performance of SMEs in Indian automobile industry,” Technovation, vol. 30, no. 11-12, pp. 558–569, 2010.
  12. V. Souitaris, “External communication determinants of innovation in the context of a newly industrialized country: a comparison of objective and perceptual results from Greece,” Technovation, vol. 21, no. 1, pp. 25–34, 2001.
  13. A. Uzun, “Technological innovation activities in Turkey: the case of manufacturing industry, 1995–1997,” Technovation, vol. 21, no. 3, pp. 189–196, 2001.
  14. R. Landry, N. Amara, and M. Lamari, “Does social capital determine innovation? To what extent,” Technological Forecasting and Social Change, vol. 69, no. 7, pp. 681–701, 2002.
  15. S. A. Zahra and G. George, “Absorptive capacity: a review, reconceptualization, and extension,” Academy of Management Review, vol. 27, no. 2, pp. 185–203, 2002.
  16. R. J. Calantone, S. T. Cavusgil, and Y. Zhao, “Learning orientation, firm innovation capability, and firm performance,” Industrial Marketing Management, vol. 31, no. 6, pp. 515–524, 2002.
  17. I. Nonaka, R. Toyama, and N. Konno, “SECI, Ba and leadership: a unified model of dynamic knowledge creation,” Long Range Planning, vol. 33, no. 1, pp. 5–34, 2000.
  18. N. O'Regan, A. Ghobadian, and M. Sims, “Fast tracking innovation in manufacturing SMEs,” Technovation, vol. 26, no. 2, pp. 251–261, 2006.
  19. K. Cormican and D. O'Sullivan, “Auditing best practice for effective product innovation management,” Technovation, vol. 24, no. 10, pp. 819–829, 2004.
  20. T. Koc, “Organizational determinants of innovation capacity in software companies,” Computers and Industrial Engineering, vol. 53, no. 3, pp. 373–385, 2007.
  21. R. C. M. Yam, J. C. Guan, K. F. Pun, and E. P. Y. Tang, “An audit of technological innovation capabilities in Chinese firms: some empirical findings in Beijing, China,” Research Policy, vol. 33, no. 8, pp. 1123–1140, 2004.
  22. A. Sánchez, A. Lago, X. Ferràs, and J. Ribera, “Innovation management practices, strategic adaptation, and business results: evidence from the electronics industry,” Journal of Technology Management and Innovation, vol. 6, no. 2, pp. 14–39, 2011.
  23. U. Cantner, E. Conti, and A. Meder, “Networks and innovation: the role of social assets in explaining firms' innovative capacity,” European Planning Studies, vol. 18, no. 12, pp. 1937–1956, 2010.
  24. A. Verhaeghe and R. Kfir, “Managing innovation in a knowledge intensive technology organisation (KITO),” R&D Management, vol. 32, no. 5, pp. 409–417, 2002.
  25. G. J. Avlonitis, P. G. Papastathopoulou, and S. P. Gounaris, “An empirically-based typology of product innovativeness for new financial services: success and failure scenarios,” Journal of Product Innovation Management, vol. 18, no. 5, pp. 324–342, 2001.
  26. K. Atuahene-Gima, “An exploratory analysis of the impact of market orientation on new product performance: a contingency approach,” Journal of Product Innovation Management, vol. 12, no. 4, pp. 275–293, 1995.
  27. A. Griffin and A. L. Page, “An interim report on measuring product development success and failure,” Journal of Product Innovation Management, vol. 10, no. 4, pp. 291–308, 1993.
  28. M. von Zedtwitz, “Organizational learning through post-project reviews in R&D,” R&D Management, vol. 32, no. 3, pp. 255–268, 2002.
  29. M. Schoeman, D. Baxter, K. Goffin, and P. Micheli, “Commercialization partnerships as an enabler of UK public sector innovation: the perfect match?” Public Money & Management, vol. 32, no. 6, pp. 425–432, 2012.
  30. J. P. François, F. Favre, and S. Negassi, “Competence and organization: two drivers of innovation. A micro-econometric study,” Economics of Innovation and New Technology, vol. 11, no. 3, pp. 249–270, 2002.
  31. B. A. Lukas and O. C. Ferrell, “The effect of market orientation on product innovation,” Journal of the Academy of Marketing Science, vol. 28, no. 2, pp. 239–247, 2000.
  32. J. L. Furman, M. E. Porter, and S. Stern, “The determinants of national innovative capacity,” Research Policy, vol. 31, no. 6, pp. 899–933, 2002.
  33. L. Bull and I. Ferguson, “Factors influencing the success of wood product innovations in Australia and New Zealand,” Forest Policy and Economics, vol. 8, no. 7, pp. 742–750, 2006.
  34. K. N. Kang and H. Park, “Influence of government R&D support and inter-firm collaborations on innovation in Korean biotechnology SMEs,” Technovation, vol. 32, no. 1, pp. 68–78, 2012.
  35. C. S. Marques and J. Ferreira, “SME innovative capacity, competitive advantage and performance in a ‘traditional’ industrial region of Portugal,” Journal of Technology Management and Innovation, vol. 4, no. 4, pp. 54–68, 2009.
  36. L. Tang and R. Chi, “The evaluation criteria on ICT enterprises innovation capability: evidence from Zhejiang province,” Journal of Convergence Information Technology, vol. 6, no. 1, pp. 69–78, 2011.
  37. M. L. Flor and M. J. Oltra, “Identification of innovating firms through technological innovation indicators: an application to the Spanish ceramic tile industry,” Research Policy, vol. 33, no. 2, pp. 323–336, 2004.
  38. A. Kleinknecht, K. van Montfort, and E. Brouwer, “The non-trivial choice between innovation indicators,” Economics of Innovation and New Technology, vol. 11, pp. 109–121, 2002.
  39. J. Michie, “The internationalisation of the innovation process,” International Journal of the Economics of Business, vol. 5, pp. 261–277, 1998.
  40. Z. Acs and D. Audretsch, Innovation and Small Firms, MIT Press, Cambridge, Mass, USA, 1991.
  41. R. Coombs, P. Narandren, and A. Richards, “A literature-based innovation output indicator,” Research Policy, vol. 25, no. 3, pp. 403–413, 1996.
  42. OECD, The Oslo Manual: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data, 2nd edition, 1997.
  43. D. Archibugi and M. Pianta, “Measuring technological change through patents and innovation surveys,” Technovation, vol. 16, no. 9, pp. 451–468, 1996.
  44. E. Mansfield, “How rapidly does new industrial technology leak out?” The Journal of Industrial Economics, vol. 34, no. 2, pp. 217–223, 1985.
  45. B. S. Tether, “Small and large firms: sources of unequal innovations?” Research Policy, vol. 27, no. 7, pp. 725–745, 1998.
  46. D. Archibugi and G. Sirilli, “The direct measurement of technological innovation in business,” in Survey Methodology and Measurement, 2000.
  47. N. Amara, R. Landry, N. Becheikh, and M. Ouimet, “What factors drive radical innovations in traditional manufacturing industries?” in Proceedings of the DRUID Summer Conference, Copenhagen, Denmark, 2004.
  48. G. A. Churchill, “A paradigm for developing better measures of marketing constructs,” Journal of Marketing Research, vol. 16, no. 1, pp. 64–73, 1979.
  49. E. Szeto, “Innovation capacity: working towards a mechanism for improving innovation within an inter-organizational network,” TQM Magazine, vol. 12, no. 2, pp. 149–157, 2000.
  50. UNIDO, Strategy Document to Enhance the Contribution of an Efficient and Effective Small and Medium-Sized Enterprises Sector to Industrial and Economic Development in the Islamic Republic of Iran, The United Nations Industrial Development Organization, Vienna, Austria, 2003.
  51. H. Forsman, “Innovation capacity and innovation development in small enterprises. A comparison between the manufacturing and service sectors,” Research Policy, vol. 40, no. 5, pp. 739–750, 2011.
  52. J. S. Armstrong and T. S. Overton, “Estimating non-response bias in mail surveys,” Journal of Marketing Research, vol. 14, no. 3, pp. 396–402, 1977.
  53. J. Hair, W. Black, B. Babin, and R. Anderson, Multivariate Data Analysis, Prentice Hall, 7th edition, 2010.
  54. A. C. Rencher, Methods of Multivariate Analysis, John Wiley & Sons, Toronto, Canada, 2nd edition, 2002.
  55. R. Shah and S. M. Goldstein, “Use of structural equation modeling in operations management research: looking back and forward,” Journal of Operations Management, vol. 24, no. 2, pp. 148–169, 2006.
  56. L. J. Cronbach, “Coefficient alpha and the internal structure of tests,” Psychometrika, vol. 16, no. 3, pp. 297–334, 1951.
  57. J. C. Nunnally, Psychometric Theory, McGraw-Hill, New York, NY, USA, 2nd edition, 1978.
  58. L. Hatcher, A Step-by-Step Approach to Using the SAS System for Factor Analysis and Structural Equation Modeling, SAS Institute Inc., 1994.
  59. C. E. Werts, R. L. Linn, and K. G. Jöreskog, “Intraclass reliability estimates: testing structural assumptions,” Educational and Psychological Measurement, vol. 34, pp. 25–33, 1974.
  60. R. P. Bagozzi and Y. Yi, “On the evaluation of structural equation models,” Journal of the Academy of Marketing Science, vol. 16, no. 1, pp. 74–94, 1988.
  61. R. P. Bagozzi, “An examination of the validity of two models of attitude,” Multivariate Behavioral Research, vol. 16, no. 3, pp. 323–359, 1981.
  62. D. C. Bello and R. Lohtia, “Export channel design: the use of foreign distributors and agents,” Journal of the Academy of Marketing Science, vol. 23, no. 2, pp. 83–93, 1995.
  63. K. G. Jöreskog and D. Sörbom, Lisrel 7 User's Reference Guide, Scientific Software, 1989.
  64. J. C. Anderson, “An approach for confirmatory measurement and structural equation modeling of organizational properties,” Management Science, vol. 33, no. 4, pp. 525–541, 1987.
  65. J. C. Anderson and D. W. Gerbing, “Structural equation modeling in practice: a review and recommended two-step approach,” Psychological Bulletin, vol. 103, no. 3, pp. 411–423, 1988.
  66. C. Fornell and D. F. Larcker, “Evaluating structural equation models with unobservable variables and measurement error,” Journal of Marketing Research, vol. 18, no. 1, pp. 39–50, 1981.
  67. R. Evangelista and A. Vezzani, “The economic impact of technological and organizational innovations. A firm-level analysis,” Research Policy, vol. 39, no. 10, pp. 1253–1263, 2010.
  68. R. Rohrbeck and H. G. Gemünden, “Corporate foresight: its three roles in enhancing the innovation capacity of a firm,” Technological Forecasting and Social Change, vol. 78, no. 2, pp. 231–243, 2011.
  69. M. Saunila and J. Ukko, “A conceptual framework for the measurement of innovation capability and its effects,” Baltic Journal of Management, vol. 7, no. 4, pp. 355–375, 2012.
  70. X. Peng, Improvement and Innovation Capabilities in Manufacturing: Linking Practice Bundles to Strategic Goals and Supplier Collaboration, Faculty of Graduate School, Minneapolis, Minn, USA, 2007.
  71. C. M. Christensen and M. E. Raynor, The Innovator's Solution: Creating and Sustaining Successful Growth, Harvard Business Press, 2003.