Advances in Human-Computer Interaction
Volume 2012 (2012), Article ID 948693, 11 pages
http://dx.doi.org/10.1155/2012/948693
Research Article

The Role of Usability in Business-to-Business E-Commerce Systems: Predictors and Its Impact on User's Strain and Commercial Transactions

1Institute of Psychology, University of Kiel, 24098 Kiel, Germany
2Institute of Psychology, University of Trier, 54286 Trier, Germany

Received 13 March 2012; Revised 13 July 2012; Accepted 18 July 2012

Academic Editor: Kiyoshi Kiyokawa

Copyright © 2012 Udo Konradt et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study examines the impact of organizational antecedences (i.e., organizational support and information policy) and technical antecedences (i.e., subjective server response time and objective server response time) on perceived usability, perceived strain, and commercial transactions (i.e., purchases) in business-to-business (B2B) e-commerce. Data were gathered from a web-based study with 491 employees using e-procurement bookseller portals. Structural equation modeling revealed positive relationships of organizational support and information policy, and a negative relationship of subjective server response time, to usability after controlling for users' age, gender, and computer experience. Perceived usability was negatively related to perceived strain and fully mediated the relation between the three significant antecedences and perceived strain, while purchases were not predicted. Results are discussed in terms of theoretical implications and consequences for successfully designing and implementing B2B e-commerce information systems.

1. Introduction

Business-to-business (B2B) e-commerce information systems, which make use of the Internet and web technologies for interorganizational business transactions, are widely used. Various scholars have discussed the importance of success factors in the acceptance and adoption of B2B applications and supply chain management (e.g., [1]), and research has examined potential success factors. Results suggest that information quality, system quality, and trust between supplier and customer are potential critical success factors which facilitate e-commerce systems for B2B buying and selling (e.g., [1–4]). Notwithstanding its significance, very little research has systematically examined the impact of the usability of B2B e-commerce information systems on users' strain, system adoption, and transactions. Usability refers to the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” [5]. Related research on web-based applications and business-to-customer (B2C) e-commerce substantiates that the usability of information systems predicts transaction intentions and consumer behavior (e.g., [6, 7]).

The examination of usability and strain is important in the B2B sector for several reasons. Foremost, human-computer interaction research has demonstrated that high usability reduces users' strain and supports employees' health behavior in a positive way [8, 9]. This is particularly relevant in the B2B sector where, compared to B2C systems, lower budgets are spent on designing usable interfaces and user-friendly dialogs, as pointed out by Temkin et al. [10] and Nielsen [11]. In this vein, our research on usability in B2B environments is of particular theoretical and practical interest because B2B users often cannot easily change suppliers. Stressors from the work environment may then affect work-related variables such as job attitudes, turnover intention, and work performance [12]. The negative effect on health and job-related outcomes seems most significant in B2B settings where there is no autonomy or possibility to change the web systems (cf. demand-resource models; [13]). Next, it can be surmised that B2B systems with low usability will increase avoidance behavior and motivate users to refuse system use. While empirical research on B2C systems provides support for this relationship, there is no corresponding research in the B2B sector. Given the detrimental effects of low usability on users' strain and system adoption, poorly designed systems might fail to exploit beneficial cost-saving potentials.

The present study serves two broad purposes: (1) to examine the role of usability in strain and transaction behavior in B2B e-commerce information systems, which—to the best of our knowledge—has not been examined in previous research, and (2) to determine anteceding and accompanying conditions of usability in B2B e-commerce information systems. Scholars have suggested a wide range of usability factors in user-interface design, including credibility, content, and response time [14]. More recently, Konradt et al. [15] extended the antecedences by demonstrating that organizational support and information policy were positively related to users' perceived ease of use [16] and negatively related to users' strain in employee self-service systems. Following this line of research, we aim to examine technical antecedences (i.e., response time) and organizational antecedences (i.e., organizational support and information policy) which are possibly related to B2B systems' usability.

Finally, although predictors of usability and possible impacts on users' strain have been examined in HCI and human factors research (see [8] for an overview), the theoretical links of usability to the stressor-strain process remain unclear. Initial evidence that usability is an important mediator derives from research showing that ease of use fully mediates the relation between organizational support and strain as well as between information policy and strain [15]. Besides the mediation model, other conceptions are theoretically plausible, including a partial mediation model, which adds a direct path from usability to users' strain, and a direct effect model, which regards usability as an additional antecedence of users' strain. Therefore, this research also aims to advance theory by assessing the mediating role of usability.

2. Literature Review and Hypotheses

2.1. Usability: Concept, Antecedences, and Consequences

The concept of usability derives from the interdisciplinary field of Human-Computer Interaction [5, 17, 18] and has been defined as the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Effectiveness describes the degree of accuracy and completeness with which the user is able to accomplish a task with an application. Efficiency refers to the ratio between effectiveness and the effort that has to be invested to reach the goal. Thus, an application can only work efficiently if it can be used effectively. Satisfaction accounts for users' acceptance and evaluation of an application [17].

2.2. The Importance of Usability in B2B E-Commerce Systems

E-commerce websites of Internet businesses that sell primarily to end customers (B2C) and of those that sell primarily to other businesses (B2B) have similar features, but some characteristics differ. In B2C, the relationship to the consumer largely depends on brand identity. Thus, emotional buying decisions and features which make the purchase as easy and comfortable as possible for different segments of consumers are important. Also, B2C e-commerce websites employ supplementary merchandising activities and provide aesthetically pleasing features which heighten the enjoyment of visiting and buying in order to keep customers coming back.

In B2B, in contrast, the relationship to the customer largely depends on a contractual basis, and brand identity is for the most part built on personal relationships. Buying decisions are more rational, based on business value, and often reflect long-term business relationships that include support, follow-up, and future enhancements and add-ons. Users might thus be restricted in their voluntary choice of a shop in which to complete their purchase. Also, sites' products and services are often extremely specialized. For this reason, B2B websites typically provide a much wider range of information and more detailed information on products and services (e.g., in-depth white papers and specifications). Although aesthetically pleasing features are rather irrelevant, simple and user-friendly menu structures should assist the user in finding the right information easily and effectively, and details should be readily accessible. For these reasons, usability is important in B2B systems.

2.3. On the Meaning of Usability in B2B Websites

While these effects of usability have been investigated within the B2C context, effects of usability in B2B have rarely been examined. Consequently, we start from B2C research (e.g., [14, 19]) in order to develop our hypotheses. Research on B2C e-commerce information systems suggests usability as an important success factor [20–22]. Berthon et al. [23] recognized that the usability of a website is critical in converting site visitors from “lookers” to “buyers.” Ghose and Dou [24] studied interactive functions in websites and found that the greater the degree of interactivity (leading to higher usability of information) in a website, the higher is the website's attractiveness.

One option to positively influence usability is to provide organizational support, which influences employees' perception and attitude [25]. Organizational support is defined as “the extent to which top and middle management allocates adequate resources to help employees to achieve organizational goals, for example, by providing training and technical support facilities” [15, page 1143].

A construct related to organizational support is information policy, which refers to “an organization's strategy for communicating their principles and priorities of information usage, and which information management principles are relevant to establishing the organization's goals relating to cost-effectiveness, knowledge management, and organizational culture” [15, page 1143]. Similar to organizational support, information policy contributes to employees' expectations regarding system usage. Results showed that information policy is positively related to user participation, involvement, and behavioral engagement (e.g., [15, 26, 27]).

Konradt and colleagues [15] examined the impact of organizational support and information policy on system usage, user satisfaction, and perceived strain in business information systems. Organizational support's capability to lower barriers to system adoption is realized through metastructuring actions [28–30], which include workflow patterns, work procedures, routines, organization structures, control and coordination mechanisms, as well as reward systems. On the other hand, Sharma and Yetton [30] argue that management support is a necessary and critical, but not sufficient, component in explaining variance and that hypothesizing a simple main effect does not reflect the variety of other plausible relations.

Another group of important factors in the user-system interaction process pertains to technical system characteristics. In their framework, Tsakonas and Papatheodorou [31] suggest response time as a performance criterion which determines the user-system interaction. Accordingly, Nielsen [18] proposed response time as a design principle and input factor for usability and—based on empirical results—suggested 0.1 seconds as an ideal response time during which the user does not notice any interruption, one second as the highest acceptable response time, and 10 seconds as unacceptable. (Although Palmer [21] recommended the term download delay [32] to separate the input factor from responsiveness, the former term will be kept here because download delay includes the time the request needs to reach the server, the processing time, the time the data needs to get back to the user, and lastly the time the user's machine needs to display the data.) Palmer [21] also refers to the work of Rose and Straub [33] and Nielsen [18] when considering download delay as one key aspect of usability which is easily measured, important to users, and significantly related to website success.

Regarding response time, two different measures should be distinguished. Subjective server response time (SSRT) is the subjectively perceived time between requesting a page and receiving it. By contrast, objective server response time (OSRT) is defined as the time needed for the server to process a request and send a response. In OSRT, transmission times (network delay) are not taken into account, since factors like firewalls, bandwidth sharing, high workload on the user's machine, and heavy Internet traffic can lengthen the time between user request and display of the server response, and these factors are not under the provider's influence. Marshak and Hanoch [34] identified user-perceived latency as the central performance issue in the World Wide Web. Recently, Tsakonas and Papatheodorou [31] explored usefulness and usability in open access digital libraries and demonstrated that response time is not predictive of user performance over and above the functionalities of recall and precision. Results indicate that “users are committed to spend as much time and effort is needed in using (the system) to find the information they want” [31, page 1246]. However, although research shows that the importance of server response times seems to vanish as technological improvements advance into everyday life [20], the question remains whether these findings apply in a business environment where time pressure and high workloads may influence perceptions and behavior. As such, the following hypotheses are offered.

Hypotheses 1 to 4. In B2B e-commerce information systems, usability is positively related to organizational support (H1), positively related to information policy (H2), negatively related to subjective server response time (H3), and negatively related to objective server response time (H4).

2.4. Strain

According to DIN EN ISO 10075-1 [35], psychological stress is defined as “the total assessable influence impinging upon a human being from external sources and affecting it mentally,” whereas strain is defined as “the immediate effect of mental stress on the individual (not the long-term effect) depending on his/her individual habitual and actual preconditions, including individual coping styles.” Reviews of pertinent research have established the relevance of computer usage to people's health and wellbeing [8, 9]. Although usability has seldom been coupled with occupational health models, the conceptual similarities between usability and strain appear evident. First, research on human-computer interaction showed that mental models help to apply and control software [36]. Consequently, computer systems which are less effective and efficient will require higher cognitive effort from users because expectations regarding the system's mental model are not met. As a result, this leads to negative consequences, such as user errors, user frustration, and aversive stress reactions (e.g., [37, 38]). Secondly, control theory [39] suggests that users who are exposed to systems with poor ergonomic design or who are faced with system malfunctions will feel a loss of control if they are unable to avoid them [40, 41]. Moreover, HCI research suggests an influence of emotional aspects on technology acceptance [42], and unpleasant emotional reactions, such as frustration, loss of confidence, and anger, may emerge that have detrimental effects on performance [43, 44]. More evidence derives from studies on the Technology Acceptance Model [16] showing that perceived ease of use and perceived strain are negatively related [15, 45]. Specifically, Konradt et al. [15] demonstrated a positive relation of perceived ease of use with user satisfaction, which represents a facet of usability.
Moreover, they found that the relations of organizational support and information policy with perceived strain were fully mediated by ease of use. Based on control theory, antecedences are expected to lower perceived strain because information on the project and the coming implementation enhances the feeling of control. Likewise, personnel support and hotlines help people to develop positive attitudes toward the change process and a feeling of being able to cope with the demands. Therefore, it is hypothesized:

Hypothesis 5. In B2B e-commerce information systems, usability is negatively related to strain.

Hypothesis 6. In B2B e-commerce information systems, usability fully mediates the relations between organizational/technical antecedences (i.e., organizational support, information policy, subjective server response time, and objective server response time) and user’s strain.

2.5. Transactions (Purchase)

In e-commerce information systems research, several subjective and objective success measures have been used, including the conversion rate, which describes the percentage of web-shop visitors who make a purchase directly on the website [46], the intention to cancel an online transaction [47], the intention to buy [48, 49], and actual transactions, that is, purchases that are carried out [7, 50, 51]. Evidence indicates that usability has an impact on the intention to buy [49–52] and the actual purchase [7, 50, 51]. Despite the fact that the consumer's decision to buy (or actual purchase) is generally considered a fundamental indicator of success, almost all studies draw exclusively on questionnaire data on the intention to buy instead of assessing actual purchase behavior. Theoretically, Helander and Khalid [53] proposed that the process of transactions is constituted by five consecutive decisions: the decision to visit a shop, to search for a product or service, to start the ordering, to complete the ordering, and to keep the product after delivery. The authors recognize that the ordering process is often started but not completed, resulting in termination of the process. Drawing on this cascading conception, we use the decision to complete the ordering (purchase) as a valid criterion.

The predictive role of usability for user behavior in B2C e-commerce applications possibly reflects the fact that users can choose from a variety of vendors and the competitor “is only one click away” [53]. As noted above, B2B e-commerce applications are typically based on long-term contractual arrangements made between companies which oblige employees (viz., users) to use the systems and leave them little excuse to abort a transaction process [47]. As mentioned by Moe and Fader [46], this might hold even for systems with poor usability. While poor B2B system usability is suggested to have detrimental effects on users' strain, purchase should thus be affected neither by usability nor by strain. Thus, we anticipate the following.

Hypothesis 7. Purchase in B2B e-commerce information systems is unrelated to usability and user’s strain.

3. Method

3.1. Sample

Participants were employees from companies that are contractually bound to purchase books and media via online shopping portals provided by booksellers. A change of supplier is difficult because bookseller and company often have a long-term business relationship and exclusive supply contracts. The sample consisted of 491 employees: 357 participants saved complete sets of data, while 141 data sets included missing data (6.7% on average, after two empty data sets and five data sets with more than 30% missing data were excluded). Subsequently, missing data were imputed with NORM using the EM algorithm (cf. [54]), resulting in a total of 491 data sets. No significant discrepancies between the full and the reduced sample were identified regarding the distributions, means, and standard deviations of the variables. The sample comprised 70.3% females and 29.1% males (0.6% missing data) with an average age of 40.3 years (SD = 12.3). The participants had an average Internet experience of 11.4 years (SD = 5.30) and an average Internet usage of 28.5 hours per week (SD = 17.9), while using the computer for an average of 39.1 hours per week (SD = 18.4) for private and professional purposes. Of the participants, 70.5% use the Internet to surf, 8.2% use it for research, 10.2% have their own homepage, 90.4% use it for e-mail, 10.4% read newsgroups, 10% use it for chat, 58% use it for Internet banking, and 41.3% use it for miscellaneous tasks.
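The study imputed missing questionnaire responses with the EM algorithm as implemented in NORM. As a rough, hypothetical analogue of this step, the following Python sketch uses scikit-learn's IterativeImputer (a model-based iterative imputer, not the same EM routine the authors used) on simulated data with roughly the reported missingness rate:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Simulated questionnaire matrix (100 respondents, 4 items) with ~7%
# missing responses, loosely mimicking the study's ~6.7% missing data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[rng.random(X.shape) < 0.07] = np.nan

# Iterative model-based imputation: each incomplete column is repeatedly
# regressed on the others until estimates stabilize.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_complete = imputer.fit_transform(X)
```

After imputation, `X_complete` contains no missing values and can be analyzed as a complete data set, as the authors did with their 491 cases.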

3.2. Measures

Unless stated otherwise, items were rated on a 7-point rating scale, ranging from 1 = strongly disagree to 7 = strongly agree.

3.2.1. Organizational Support

Two items from Konradt et al. [15] were used to measure organizational support: “I am satisfied with the available hotline for problems in using [system]” and “I am satisfied with the on-line help functions of [system].” In the present study, the reliability (α) of the scale was .88.

3.2.2. Information Policy

Also taken from the Konradt et al. [15] study, two items were used to measure information policy: “I was informed about the implementation of [system] early enough” and “I received sufficient information about the implementation of [system].” Reliability (α) of the scale was .81.

3.2.3. Server Response Time

The objective server response time (OSRT), that is, the time needed to process a website request on the server, was recorded. It was determined as the time between the earliest entry point on the lowest possible OSI layer (Open Systems Interconnection Reference Model; [55]) and the point at which the server sent its response.
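The paper does not describe the measurement code itself. Purely as an illustration of the idea of timing only the server-side processing interval (excluding network delay), a hypothetical wrapper might look like this; the handler and request structure are invented for the example:

```python
import time

def time_server_processing(handler):
    """Wrap a request handler so that each call also returns the elapsed
    server-side processing time (an OSRT-like measure): the clock starts
    when the request enters the handler and stops when the response is
    ready to be sent, so network transmission is excluded."""
    def wrapped(request):
        start = time.perf_counter()
        response = handler(request)          # server-side processing only
        elapsed = time.perf_counter() - start
        return response, elapsed             # elapsed is in seconds
    return wrapped

@time_server_processing
def handle(request):
    time.sleep(0.05)                         # simulated processing work
    return "200 OK"

response, osrt = handle({"path": "/catalog"})
```

In a real deployment the timestamps would be taken at the network-stack entry and exit points, as the authors describe, rather than inside the application handler.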

Subjective server response time (SSRT) was measured by a single item taken from the formative usability scale of Christophersen and Konradt [50]: “It takes too long for the store to react to my input” (reverse-coded). Although single-item measures are often regarded as inappropriate for multidimensional constructs, perceived response time is a concrete and completely clear-cut construct (cf. [56]). Additional items would change the conceptual meaning and result in a loss of content validity. Empirical evidence demonstrates good reliability and validity of single-item measures with both attitudinal and behavioral scales (e.g., [57–59]).

3.2.4. Usability

A review of the literature on the measurement of the usability construct reveals that usability has been conceptualized in a variety of ways. While most measures determine usability broadly (see [60] for a review), the Usability Questionnaire for Online-Shops (UFOS; [49, 50]) specifically addresses online shops and was systematically validated in the B2C sector, including web-based book and media sellers [49], online shops for media, printer cartridges, and concert tickets [50], and websites of health insurance companies [51]. Usability was measured with the reflective 8-item scale (UFOS V2r) from Christophersen and Konradt [50]. Sample items were “The handling of the [system] is easy to learn” and “It is too complicated for me to use this store” (reverse-coded). The reliability of the original measure was α = .94. Here, we reached an alpha of .93.

3.2.5. Strain

Four items from Schäffer-Külz [61] were used to measure strain: “When handling the [system], I feel strained,” “The [system] makes my work easier” (reverse-coded), “The [system] demands high concentration,” and “Handling of the [system] is easy for me” (reverse-coded). In the current study, the scale demonstrated good reliability (α = .90).

3.2.6. Purchase

During system use, orders placed by the customers and log-out dates were stored. Purchase was binary coded as 0 = logged out without placing an order and 1 = placed an order.
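This binary coding can be sketched as follows; the actual log format is not reported, so the event structure below is hypothetical:

```python
def code_purchase(session_events):
    """Binary-code a session: 1 if an order event occurred before logout,
    0 if the user logged out without placing an order."""
    return int(any(event["type"] == "order" for event in session_events))

# Two hypothetical sessions: one ends in a purchase, one does not.
sessions = [
    [{"type": "page_view"}, {"type": "order"}, {"type": "logout"}],
    [{"type": "page_view"}, {"type": "logout"}],
]
codes = [code_purchase(s) for s in sessions]  # → [1, 0]
```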

3.2.7. Controls

Age (in years) and gender were included as control variables because the IS research literature notes possible effects on usage behavior and preferences of online customers (e.g., [62]). The degree of user experience is generally considered another essential individual factor [63, 64]. While some empirical studies support the view that inexperienced users rate the usability of complex interfaces lower than experienced users do (e.g., [65]), others indicate that highly experienced users tend to evaluate usability more critically [66]. Computer experience was measured with two items asking for the time spent weekly on the Internet (e.g., surfing, e-mail, newsgroups, chat, and Internet banking) and with computers outside the working domain. Internal consistency reliability was .94.

3.3. Procedure

To prevent sampling bias, the total population of 76 booksellers was informed by the provider (a wholesale trader) about the survey in a monthly newsletter and asked to participate. All but 5 sellers agreed. During the survey period of five weeks, 666 registered users logged into 36 booksellers' portals. 498 users agreed to participate in the survey; 357 saved complete sets of data, while 141 data sets included missing data (6.7% on average, after two empty data sets and five data sets with more than 30% missing data were excluded), resulting in a total of 491 data sets (return rate of 74.8%).

The questionnaire was presented to the users either at the end of an order process (viz., purchase) or when logging out of the system without placing an order. A voucher worth €40 was advertised as an incentive for study participation. The participants had the options (a) to fill out and save the questionnaire, (b) to take part in the survey later, or (c) not to take part at all (in which case the application would be closed and never presented to the user again). After choosing an alternative, the user was flagged so as not to be offered the questionnaire again. The questionnaire included items on demographic information (as listed above), antecedences, usability, and strain. To avoid possible question order effects, which have been found in self-administered surveys (e.g., [67]) and subjective information system evaluations [15], items were presented in blockwise random order. The provider kept track of all user movements in the portal, and movements were logged at a level where it was possible to retrace the websites the user requested. A pretest with ten test employees from the wholesale company revealed that all items were clearly structured and easy to understand, and answering required about 5 minutes.

3.4. Analyses

Prior to hypothesis testing, we examined whether there was support for the six-factor structure of our hypothesized model. Using confirmatory factor analysis (CFA), we specified a model whereby four items loaded on the strain scale, two parcels loaded on the usability scale, two items each loaded on organizational support and information policy, and single items loaded on objective and subjective server response time. (To reduce model complexity and preserve sufficient power, we conducted the SEM analyses on a partial disaggregation model [68], creating two parcels from the eight usability items as recommended by Little et al. [69] and following the procedure of Hall et al. [70].) Because of the conceptual difference between the response time items and the other latent variables, we also tested a more restricted four-factor model excluding the response time items. AMOS 5.0 [71] was used to assess the fit of the six-factor and the four-factor model, and each fit was compared to that of a one-factor model. We used maximum-likelihood estimation and report the results of the respective fit indices (cf. [72]): the chi-square statistic, the root-mean-square error of approximation (RMSEA), the standardized root-mean-square residual (SRMR), the comparative fit index (CFI), and the normed fit index (NFI).

The fit indices for the CFA of the six-factor model (including subjective and objective server response time) were χ²(41, n = 491) = 51.69, CFI = 1.00, SRMR = .05, RMSEA = .02 (CI: .00, .04), and NFI = .98. The fit indices for the one-factor model were χ²(54, n = 491) = 1598.94, CFI = .42, SRMR = .34, RMSEA = .24 (CI: .23, .25), and NFI = .42. A chi-square difference test confirmed that the six-factor model fits the data better than the one-factor model: Δχ²(1, n = 141) = 71.40, P < .01. Very similar results were obtained for the four-factor model (excluding subjective and objective server response time: χ²(29, n = 491) = 33.77, CFI = 1.00, SRMR = .05, RMSEA = .02 (CI: .00, .04), and NFI = .99) versus a one-factor model (χ²(35, n = 491) = 1567.58, CFI = 0.43, SRMR = .24, RMSEA = .30 (CI: .29, .31), and NFI = .42). Thus, the results of the CFAs clearly supported the measurement model proposed by our study.
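A chi-square difference test of this kind compares nested models by evaluating the difference of their chi-square statistics against a chi-square distribution with the difference in degrees of freedom. A minimal Python sketch, using illustrative values rather than the fit statistics reported above:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted, chisq_full, df_full):
    """Likelihood-ratio (chi-square difference) test for nested SEM models.
    The restricted model has fewer free parameters, hence more df; a
    significant result favors the less restricted (better-fitting) model."""
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    return delta_chisq, delta_df, p_value

# Illustrative (hypothetical) values only
d, ddf, p = chi_square_difference(120.5, 54, 49.2, 41)
```

Here a Δχ² of about 71.3 on 13 degrees of freedom would be significant at P < .001, favoring the less restricted model.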

Second, we demonstrated the discriminant validity of the six and four factors using the procedures suggested by Fornell and Larcker [73]. The results (see Table 1) showed that the average variance extracted by the measure of each factor is larger than the squared correlation of that factor's measure with all measures of other factors in the model, which indicates that all factors in the measurement models possess strong discriminant validity.
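The Fornell-Larcker criterion can be checked mechanically: for every pair of factors, each factor's average variance extracted (AVE) must exceed the squared inter-factor correlation. A small sketch with hypothetical values (Table 1's actual figures are not reproduced here):

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """Return True if every factor's AVE exceeds its squared correlation
    with every other factor (Fornell-Larcker discriminant validity)."""
    k = len(ave)
    for i in range(k):
        for j in range(k):
            if i != j and ave[i] <= corr[i, j] ** 2:
                return False
    return True

# Hypothetical AVEs and factor intercorrelations for three constructs
ave = np.array([0.78, 0.71, 0.65])
corr = np.array([
    [1.00, 0.40, 0.31],
    [0.40, 1.00, 0.28],
    [0.31, 0.28, 1.00],
])
ok = fornell_larcker_ok(ave, corr)  # all AVEs exceed squared correlations
```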

Table 1: Means, standard deviations, average variance extracted, reliability estimates, and intercorrelations among study variables.

Finally, we addressed the fact that all measures came from the same source, so that any deficiency in the source contaminates the measures [74, 75]. Thus, we introduced a factor to the measurement model that represented common method variance (on which all items of the constructs were allowed to load; cf. [74]). Findings revealed that all factor loadings of the constructs under examination remained significant in the six-factor model, and all but one of the factor loadings remained significant in the four-factor model, which indicates that common method variance hardly distorts the construct validity of our measures.

4. Results

Means, standard deviations, and bivariate correlations are presented in Table 1. Table 2 presents the results of the path analyses for the fully mediated, partially mediated, and direct effects models described above. As shown by the fit statistics, all models provide a good-to-excellent fit to the data. In the hypothesized full mediation model, three antecedences have paths to usability, and usability has a path to strain. In other words, this model postulates that usability fully mediates the relationship between the relevant antecedences and perceived strain. The results show that this model fit the data very well. We conducted chi-square difference tests between the fully mediated model and the partially mediated and direct effects models, respectively. The comparison of the fully mediated model with the partially mediated models revealed that the partially mediated models were not a better fit to the data (range from Δχ²(6) = 3.92, ns, to Δχ²(4) = 0.81, ns). Likewise, the comparison of the fully mediated model with the direct effects models revealed that the fully mediated model was a better fit to the data (range from Δχ²(4) = 9.75, P < .05, to Δχ²(1) = 15.03, P < .001). In sum, these results support the hypothesized full mediation model.

Table 2: Summary of fit indices.

The standardized path coefficients for the fully mediated model are shown in Figure 1. As shown, organizational support (β = .34, P < .01), information policy (β = .22, P < .05), and subjective server response time (β = −.25, P < .05) held significant path coefficients with usability when controlling for users' age, gender, and computer experience, supporting Hypotheses 1 to 3. The strongest relation pertains to organizational support. Hypothesis 4, which predicted that objective server response time would predict usability, received no support and was rejected. Consistent with Hypothesis 5, usability held a significant path coefficient with perceived strain (β = −.25, P < .05) and fully mediated the impact of the three antecedences on perceived strain, as stated by Hypothesis 6. Based on the maximum-likelihood estimates provided by AMOS, a series of Sobel [76] tests for the significance of the mediated paths was conducted. Results revealed significant indirect effects of organizational support, information policy, and subjective server response time on strain.
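The Sobel test divides the indirect effect a·b by its approximate standard error. A sketch with hypothetical unstandardized estimates (the study's actual path estimates and standard errors are not reported in the text):

```python
import math
from scipy.stats import norm

def sobel_test(a, sa, b, sb):
    """Sobel test for an indirect effect a*b in a simple mediation model:
    a is the predictor->mediator path, b the mediator->outcome path,
    sa and sb their standard errors."""
    se = math.sqrt(b**2 * sa**2 + a**2 * sb**2)
    z = (a * b) / se
    p = 2 * norm.sf(abs(z))  # two-tailed p-value
    return z, p

# Hypothetical values: predictor -> usability (a), usability -> strain (b)
z, p = sobel_test(a=0.34, sa=0.08, b=-0.25, sb=0.09)
```

With these illustrative numbers the indirect effect would be significant at the .05 level; the study's reported conclusions rest on its own AMOS estimates.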

Figure 1: Path coefficients for the hypothesized full mediation model. Note: *p < .05; **p < .01. Statistics are standardized path coefficients. Hypothesized effects are depicted as straight lines, and control effects as dotted lines. Predictors were allowed to intercorrelate, as were the error terms of the usability parcels.

Finally, Hypothesis 7 was supported, as purchase was unrelated to users’ perceived usability (β = .01, p = .90) and strain (β = .04, p = .61). The full mediation model (see Table 2) explained 28% of the variance in usability and 7% of the variance in perceived strain.

5. Discussion

The purpose of this study was to explore the impact of organizational support, information policy, subjective server response time, and objective server response time as antecedences to perceived usability, perceived strain, and usage of B2B systems. The main findings were that organizational support and information policy were positively, and subjective server response time was negatively, related to usability when controlling for users’ age, gender, and computer experience. Contrary to our hypothesis, objective server response time did not predict usability. Usability held negative relations to perceived strain and fully mediated the impact of the three significant antecedences on perceived strain, while users’ transactions (i.e., purchases) were not predicted.

The results on antecedences are consistent with research findings suggesting that organizational support and information policy affect the ability to achieve both individual and organizational goals [30, 77, 78] and relate positively to ease of use in B2C applications [15]. Sharma and Yetton [30] noted that although the IS research literature in general suggests positive effects of organizational and management support on information technology implementations, little empirical evidence is available to support this conjecture. Thus, our result adds empirical support for this relationship. Besides, SSRT as a technical factor contributed to perceived usability, while OSRT did not. While this is counter to other studies which have found OSRT to be a negative predictor of usability [18], our results are consistent with Marshak and Levy’s [34] study, which showed that user-perceived latency is the central performance issue in the World Wide Web. Likewise, Tsakonas and Papatheodorou [31] demonstrated that response time is not a critical measure of performance in an open access digital library system. Following suggestions by Nielsen and Loranger [20], we surmise that the importance of objective server response times vanishes as technological improvements advance into everyday life, while users’ expectations of web-based business applications shape their perceptions and behavior.

Furthermore, the findings of this study suggest that usability is negatively related to users’ strain in B2B systems, while no behavioral consequences in use or non-use were shown. These results are counter to studies that have found high predictive validity of usability for purchase in B2C systems [6, 7, 50]. Given these findings, we assume that, contrary to B2C contexts, the use of B2B applications is mandatory and users cannot decide whether to complete the transaction (cf. [53]). Another explanation would be that, different from usability aspects, usefulness factors such as reliability and currency [79], or factors related to the supplier’s delivery conditions and the variety of services, are stronger predictors of transactions in B2B applications [1]. Thus, additional research is needed to further explore the interplay between usability and usefulness as causes for differences in users’ behavior in B2B and B2C systems.

Results also provide theoretical evidence that usability fully mediates the relations between the antecedences and users’ strain in B2B systems, which confirms initial results with IS management applications [15]. Moreover, this finding is consistent with Hacker’s action theory (see [80] for a review), in that poor usability hinders the work regulation process, creates obstacles, obstructions, and interruptions, and thus requires additional cognitive effort. Future research should continue to investigate the mechanisms underlying these usability effects.

Finally, the present research also makes a contribution to the measurement of usability in e-commerce information systems [60]. Recently, Christophersen and Konradt [50] presented and validated reflective and formative scales for the measurement of usability in B2C settings and called for replication to determine the generalizability of these results. The present research extends these findings by showing that the reflective measure has excellent reliability, discriminant validity, and predictive validity, and thus is suitable to quickly and concisely capture perceived usability of information systems in B2B settings.

5.1. Practical Implications

Overall, the results provide broad guidelines for management to better implement B2B e-commerce information systems. Findings show that the explained variance in usability is considerably high. Whereas most research on usability focuses on design, our results confirm the significance of good practice in implementing information systems, specifically organizational support. Employee opinion surveys and feedback systems should be employed to routinely assess and monitor users’ usability perceptions and strain. Moreover, personnel development trainings that are aligned with current human resource practices, services, and work design, compatible with job enrichment, and conducive to individual growth and health [8] could be used to better prepare employees for the demands of B2B system usage and to foster more positive attitudes towards it.

5.2. Limitations and Strengths

As with any study, this study has a number of limitations. First, variables were measured at the same time from the same source. While common method variance cannot be fully ruled out, findings from the general factor test (cf. [74]) reduce this concern. Moreover, the differential pattern of relationships between our measures lends support to the assumption that common method variance is not a major limitation of this study. Although research has generally shown that common method bias does not automatically invalidate theoretical interpretations and substantive conclusions (e.g., [81–83]), future research should validate the results of this study using different information sources. This would also involve the use of “objective” in combination with “subjective” measures of users’ strain, although users’ ratings have been found to be of considerable value in ergonomics research and practice [84].

Second, we used existing B2B information systems within the domain of booksellers, which might decrease the validity of our results in two respects. Although bookseller portals were selected because users have highly specific ideas about what they expect from a portal and have wide-ranging prior computer experience (cf. [85]), we could not cover the diversity of B2B e-commerce applications, including interorganizational, multiorganizational, and extraorganizational systems. In a strict sense, we can draw conclusions only about e-procurement information systems. Moreover, even though we controlled for user demographics and user experience, the portals we used might be affected by a bundle of supplementary variables. Further examinations of the hypotheses with information systems in other B2B areas, and with mock online applications that would allow generating systems with different levels of usability, will help to address this possible limitation.

Third, the quantitative analyses of the research model were based on cross-sectional data. Consequently, causal interpretations of the path coefficients must be made with caution. The advantages of longitudinal studies have been repeatedly described. Burkholder and Harlow [86], for instance, argue that longitudinal studies allow detecting patterns of covariation over time, testing potential causality, and testing the relative construct stabilities of variables. Future research should employ designs that allow causal conclusions in order to extend the understanding of online customer decision-making processes.

A strength of this study is that we provide initial evidence for the impact of usability within B2B systems using a nomological network [87] which includes various antecedences, consequences, and controls. Additionally, to conduct conservative tests of the hypotheses, we controlled for users’ age, gender, and computer experience. As the unique paths in the structural equation model can be interpreted as partial correlations, the internal and external validity of our main findings are improved. Second, instead of subjective intention measures, we used a more valid and compelling approach to assessing the impact of usability and strain by examining an objective measure of user behavior (i.e., purchase). Finally, we advance theory and provide more convincing evidence of the mediating role of usability, which explains the antecedence-outcome relations.

5.3. Conclusions

The current research represents an initial step towards a systematic conceptualization of usability and strain in B2B e-commerce information systems. This study is the first to examine usability and strain issues in B2B systems. Given this uniqueness, future work needs to show that our results are replicable and can be generalized to other B2B contexts. The results point out that usability is an important predictor of users’ perceived strain when interacting with B2B applications and provide guidelines regarding software design and implementation in organizations. Specifically, B2B systems should follow a preventive work design strategy by providing organizational support and information to users, which will result in better usability and lower strain. The fact that purchases are not predicted by usability and strain suggests that users are probably obliged to employ the system even when doing so is distressing and the dialogue is accompanied by perceptions of low efficiency, effectiveness, and satisfaction.

References

  1. R. Eid, M. Trueman, and A. M. Ahmed, “A cross-industry review of B2B critical success factors,” Internet Research, vol. 12, no. 2, pp. 110–123, 2002.
  2. A. J. Cullen and M. Taylor, “Critical success factors for B2B e-commerce use within the UK NHS pharmaceutical supply chain,” International Journal of Operations and Production Management, vol. 29, no. 11, pp. 1156–1185, 2009.
  3. M. E. Jennex, D. Amoroso, and O. Adelakun, “E-commerce infrastructure success factors for small companies in developing economies,” Electronic Commerce Research, vol. 4, pp. 263–286, 2004.
  4. S. M. Lee and T. Cata, “Critical success factors of web-based e-service: the case of e-insurance,” International Journal of E-Business Research, vol. 1, no. 3, pp. 21–40, 2005.
  5. DIN EN ISO 9241-11, Ergonomic requirements for office work with visual display terminals—Part 11: Guidance on usability, 1998.
  6. P. A. Pavlou and M. Fygenson, “Understanding and predicting electronic commerce adoption: an extension of the theory of planned behavior,” MIS Quarterly, vol. 30, no. 1, pp. 115–143, 2006.
  7. V. Venkatesh and R. Agarwal, “Turning visitors into customers: a usability-centric perspective on purchase behavior in electronic channels,” Management Science, vol. 52, no. 3, pp. 367–382, 2006.
  8. M. D. Coovert and L. F. Thompson, “Technology and workplace health,” in Handbook of Occupational Health Psychology, J. C. Quick and L. E. Tetrick, Eds., pp. 221–241, APA, Washington, DC, USA, 2002.
  9. K.-C. Hamborg and S. Greif, “New technologies and stress,” in Handbook of Work and Health Psychology, M. J. Schabracq, J. A. Winnubst, and C. L. Cooper, Eds., pp. 209–235, John Wiley & Sons, Chichester, UK, 2nd edition, 2003.
  10. B. D. Temkin, H. Manning, M. Dorsey, and H. Lee, “Web sites continue to fail the usability test,” 2003, http://www.usabilitytesting.org/.
  11. J. Nielsen, “B2B usability,” 2006, http://www.useit.com/.
  12. N. P. Podsakoff, J. A. Lepine, and M. A. Lepine, “Differential challenge stressor-hindrance stressor relationships with job attitudes, turnover intentions, turnover, and withdrawal behavior: a meta-analysis,” Journal of Applied Psychology, vol. 92, no. 2, pp. 438–454, 2007.
  13. A. B. Bakker and E. Demerouti, “The job demands-resources model: state of the art,” Journal of Managerial Psychology, vol. 22, no. 3, pp. 309–328, 2007.
  14. J. Nielsen, Designing Web Usability, New Riders Publishing, Indianapolis, Ind, USA, 2000.
  15. U. Konradt, T. Christophersen, and U. Schäffer-Külz, “Predicting user satisfaction, strain and system usage of employee self-services,” International Journal of Human Computer Studies, vol. 64, no. 11, pp. 1141–1153, 2006.
  16. F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of information technology,” MIS Quarterly, vol. 13, no. 3, pp. 319–340, 1989.
  17. D. J. Gillan and R. G. Bias, “Usability science. I: foundations,” International Journal of Human-Computer Interaction, vol. 13, no. 4, pp. 351–372, 2001.
  18. J. Nielsen, Usability Engineering, Morgan Kaufmann, San Francisco, Calif, USA, 1993.
  19. J. Nielsen, “B-to-B users want sites with B-to-C service, ease,” B to B, vol. 90, no. 7, p. 48, 2005.
  20. J. Nielsen and H. Loranger, Web Usability, Addison Wesley, Munich, Germany, 2008.
  21. J. W. Palmer, “Web site usability, design, and performance metrics,” Information Systems Research, vol. 13, no. 2, pp. 151–167, 2002.
  22. M. Petre, S. Minocha, and D. Roberts, “Usability beyond the website: an empirically-grounded e-commerce evaluation instrument for the total customer experience,” Behaviour and Information Technology, vol. 25, no. 2, pp. 189–203, 2006.
  23. P. Berthon, L. F. Pitt, and R. T. Watson, “The World Wide Web as an advertising medium: toward an understanding of conversion efficiency,” Journal of Advertising Research, vol. 36, no. 1, pp. 43–54, 1996.
  24. S. Ghose and W. Dou, “Interactive functions and their impacts on the appeal of internet presence sites,” Journal of Advertising Research, vol. 38, no. 2, pp. 29–43, 1998.
  25. M. Fishbein and I. Ajzen, Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, Mass, USA, 1975.
  26. H. Barki and J. Hartwick, “Measuring user participation, user involvement, and user attitude,” MIS Quarterly, vol. 18, no. 1, pp. 59–82, 1994.
  27. B. Ives and M. H. Olson, “User participation and MIS success: a review of research,” Management Science, vol. 30, pp. 586–603, 1984.
  28. W. J. Orlikowski, J. Yates, K. Okamura, and M. Fujimoto, “Shaping electronic communication: the metastructuration of technology in the context of use,” Organization Science, vol. 6, pp. 423–444, 1995.
  29. R. L. Purvis, V. Sambamurthy, and R. W. Zmud, “The assimilation of knowledge platforms in organizations: an empirical investigation,” Organization Science, vol. 12, no. 2, pp. 117–135, 2001.
  30. R. Sharma and P. Yetton, “The contingent effects of management support and task interdependence on successful information systems implementation,” MIS Quarterly, vol. 27, no. 4, pp. 533–555, 2003.
  31. G. Tsakonas and C. Papatheodorou, “Exploring usefulness and usability in the evaluation of open access digital libraries,” Information Processing and Management, vol. 44, no. 3, pp. 1234–1250, 2008.
  32. G. M. Rose, H. Khoo, and D. W. Straub, “Current technological impediments to business-to-consumer electronic commerce,” Communications of the AIS, vol. 1, no. 16, pp. 1–74, 1999.
  33. G. M. Rose and D. W. Straub, “The effect of download time on consumer attitude toward the e-Service retailer,” E-Service Journal, vol. 1, no. 1, pp. 55–76, 2001.
  34. M. Marshak and H. Levy, “Evaluating web user perceived latency using server side measurements,” Computer Communications, vol. 26, no. 8, pp. 872–887, 2003.
  35. DIN EN ISO 10075-1, Ergonomische Grundlagen bezüglich psychischer Arbeitsbelastung [Ergonomic principles related to mental workload], 2000.
  36. J. M. Carroll and J. R. Olson, “Mental models in human-computer interaction,” in Handbook of Human-Computer Interaction, M. Helander, Ed., pp. 45–85, North Holland, Amsterdam, The Netherlands, 1988.
  37. J. R. Coyle and S. J. Gould, “How consumers generate clickstreams through web sites: an empirical investigation of hypertext, schema, and mapping theoretical explanations,” Journal of Interactive Advertising, vol. 2, no. 2, 2002.
  38. M. Otter and H. Johnson, “Lost in hyperspace: metrics and mental models,” Interacting with Computers, vol. 13, no. 1, pp. 1–40, 2000.
  39. W. T. Powers, Behavior: The Control of Perception, Benchmark Publications, New Canaan, Conn, USA, 2005.
  40. C. M. Allwood and S. Thomée, “Usability and database search at the Swedish Employment Service,” Behaviour and Information Technology, vol. 17, no. 4, pp. 231–241, 1998.
  41. V. Venkatesh, “Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model,” Information Systems Research, vol. 11, no. 4, pp. 342–365, 2000.
  42. R. McCalla, J. N. Ezingeard, and K. Money, “A behavioural approach to CRM systems evaluation,” Electronic Journal of Information Systems Evaluation, vol. 6, pp. 145–154, 1993.
  43. S. Brave and C. Nass, “Emotion in human-computer interaction,” in The Human-Computer Interaction Handbook, J. A. Jacko and A. Sears, Eds., pp. 81–96, Erlbaum, Mahwah, NJ, USA, 2003.
  44. T. Partala and V. Surakka, “The effects of affective interventions in human-computer interaction,” Interacting with Computers, vol. 16, no. 2, pp. 295–309, 2004.
  45. N. A. Streitz and E. Eberleh, Mentale Belastung und Kognitive Prozesse bei Komplexen Dialogstrukturen [Mental workload and cognitive processes in complex dialogue structures], Wirtschaftsverlag NW, Bremerhaven, Germany, 1989.
  46. W. W. Moe and P. S. Fader, “Dynamic conversion behavior at e-commerce sites,” Management Science, vol. 50, no. 3, pp. 326–335, 2004.
  47. J. Cho, “Likelihood to abort an online transaction: influences from cognitive evaluations, attitudes, and behavioral variables,” Information and Management, vol. 41, no. 7, pp. 827–838, 2004.
  48. S. L. Jarvenpaa, N. Tractinsky, and M. Vitale, “Consumer trust in an internet store,” Information Technology and Management, vol. 1, pp. 45–71, 2000.
  49. U. Konradt, H. Wandke, B. Balazs, and T. Christophersen, “Usability in online shops: scale construction, validation and the influence on the buyers' intention and decision,” Behaviour and Information Technology, vol. 22, no. 3, pp. 165–174, 2003.
  50. T. Christophersen and U. Konradt, “The development of a formative and a reflective scale for the assessment of online store usability,” Behaviour & Information Technology, in press.
  51. U. Konradt, H. Held, T. Christophersen, and F. W. Nerdinger, “The role of usability in e-commerce service,” International Journal of E-Business Research, in press.
  52. D. Gefen and D. Straub, “The relative importance of perceived ease-of-use in IS adoption: a study of e-commerce adoption,” Journal of AIS, vol. 1, no. 8, pp. 1–30, 2000.
  53. M. G. Helander and H. M. Khalid, “Modeling the customer in electronic commerce,” Applied Ergonomics, vol. 31, no. 6, pp. 609–619, 2000.
  54. J. L. Schafer and J. W. Graham, “Missing data: our view of the state of the art,” Psychological Methods, vol. 7, no. 2, pp. 147–177, 2002.
  55. A. S. Tanenbaum, Computer Networks, Prentice Hall, Upper Saddle River, NJ, USA, 2003.
  56. J. R. Rossiter, “The C-OAR-SE procedure for scale development in marketing,” International Journal of Research in Marketing, vol. 19, no. 4, pp. 305–335, 2002.
  57. T. Christophersen and U. Konradt, “Reliability, validity, and sensitivity of a single-item measure of online store usability,” International Journal of Human Computer Studies, vol. 69, no. 4, pp. 269–280, 2011.
  58. J. P. Wanous, A. E. Reichers, and M. J. Hudy, “Overall job satisfaction: how good are single-item measures?” Journal of Applied Psychology, vol. 82, no. 2, pp. 247–252, 1997.
  59. J. P. Wanous and M. J. Hudy, “Single-item reliability: a replication and extension,” Organizational Research Methods, vol. 4, no. 4, pp. 361–375, 2001.
  60. J. Nielsen and R. L. Mack, Usability Inspection Methods, John Wiley & Sons, New York, NY, USA, 2004.
  61. U. G. Schäffer-Külz, Self-Service-Systeme und Mitarbeiterportale [Self-service systems and employee portals] [Ph.D. dissertation], University of Kiel, Kiel, Germany, 2004.
  62. A. Levin, I. P. Levin, and J. A. Weller, “A multi-attribute analysis of preferences: differences across products, consumers, and shopping stages,” Journal of Electronic Commerce Research, vol. 6, pp. 281–290, 2005.
  63. G. C. Bruner and A. Kumar, “Web commercials and advertising hierarchy-of-effects,” Journal of Advertising Research, vol. 40, no. 1-2, pp. 35–42, 2000.
  64. N. Park, R. Roman, S. Lee, and J. E. Chung, “User acceptance of a digital library system in developing countries: an application of the Technology Acceptance Model,” International Journal of Information Management, vol. 29, no. 3, pp. 196–209, 2009.
  65. W. Hampton-Sosa and M. Koufaris, “The effect of Web site perceptions on initial trust in the owner company,” International Journal of Electronic Commerce, vol. 10, no. 1, pp. 55–81, 2005.
  66. J. A. Jacko, A. Sears, and M. S. Borella, “The effect of network delay and media on user perceptions of web resources,” Behaviour and Information Technology, vol. 19, no. 6, pp. 427–439, 2000.
  67. N. Schwarz, R. Groves, and H. Schuman, “Survey methods,” in Handbook of Social Psychology, D. Gilbert, S. Fiske, and G. Lindzey, Eds., vol. 1, pp. 143–179, McGraw-Hill, New York, NY, USA, 1998.
  68. R. P. Bagozzi and J. R. Edwards, “A general approach for representing constructs in organizational research,” Organizational Research Methods, vol. 1, no. 1, pp. 45–87, 1998.
  69. T. D. Little, W. A. Cunningham, G. Shahar, and K. F. Widaman, “To parcel or not to parcel: exploring the question, weighing the merits,” Structural Equation Modeling, vol. 9, no. 2, pp. 151–173, 2002.
  70. R. J. Hall, A. F. Snell, and M. S. Foust, “Item parceling strategies in SEM: investigating the subtle effects of unmodeled secondary constructs,” Organizational Research Methods, vol. 2, no. 3, pp. 233–256, 1999.
  71. J. Arbuckle, Amos 5.0 Update to the Amos User’s Guide, SmallWaters, Chicago, Ill, USA, 2003.
  72. L. T. Hu and P. M. Bentler, “Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives,” Structural Equation Modeling, vol. 6, no. 1, pp. 1–55, 1999.
  73. C. Fornell and D. F. Larcker, “Evaluating structural equation models with unobservable variables and measurement error,” Journal of Marketing Research, vol. 18, pp. 39–50, 1981.
  74. P. M. Podsakoff, S. B. MacKenzie, J. Y. Lee, and N. P. Podsakoff, “Common method biases in behavioral research: a critical review of the literature and recommended remedies,” Journal of Applied Psychology, vol. 88, no. 5, pp. 879–903, 2003.
  75. P. M. Podsakoff and D. W. Organ, “Self-reports in organizational research: problems and prospects,” Journal of Management, vol. 12, pp. 531–544, 1986.
  76. M. E. Sobel, “Asymptotic confidence intervals for indirect effects in structural equation models,” in Sociological Methodology, S. Leinhardt, Ed., pp. 290–312, American Sociological Association, Washington, DC, USA, 1982.
  77. W. Fuerst and P. Cheney, “Factors affecting the perceived utilization of information systems,” Decision Sciences, vol. 17, pp. 329–356, 1982.
  78. M. Igbaria, N. Zinatelli, P. Cragg, and A. L. M. Cavaye, “Personal computing acceptance factors in small firms: a structural equation model,” MIS Quarterly, vol. 21, no. 3, pp. 279–305, 1997.
  79. S. Buchanan and A. Salako, “Evaluating the usability and usefulness of a digital library,” Library Review, vol. 58, no. 9, pp. 638–651, 2009.
  80. W. Hacker, “Action regulation theory—a practical tool for the design of modern work processes?” European Journal of Work and Organization Psychology, vol. 12, pp. 105–130, 2003.
  81. D. H. Doty and W. H. Glick, “Common methods bias: does common methods variance really bias results?” Organizational Research Methods, vol. 1, no. 4, pp. 374–406, 1998.
  82. P. E. Spector, “Method variance in organizational research: truth or urban legend?” Organizational Research Methods, vol. 9, no. 2, pp. 221–232, 2006.
  83. P. E. Spector and M. T. Brannick, “Common method variance or measurement bias? The problem and possible solutions,” in The Sage Handbook of Organizational Research Methods, D. Buchanan and A. Bryman, Eds., pp. 346–362, Sage, London, UK, 2009.
  84. J. Annett, “Subjective rating scales in ergonomics: a reply,” Ergonomics, vol. 45, no. 14, pp. 1042–1046, 2002.
  85. A. Hesse and V. Bracewell Lewis, “German online retail and travel forecast,” 2009, http://www.ecc-handel.de/.
  86. G. J. Burkholder and L. L. Harlow, “An illustration of a longitudinal cross-lagged design for larger structural equation models,” Structural Equation Modeling, vol. 10, no. 3, pp. 465–486, 2003.
  87. L. J. Cronbach and P. E. Meehl, “Construct validity in psychological tests,” Psychological Bulletin, vol. 52, no. 4, pp. 281–302, 1955.