ISRN Education
Volume 2014, Article ID 736931, 11 pages
http://dx.doi.org/10.1155/2014/736931
Research Article

Tool Use in Computer-Based Learning Environments: Adopting and Extending the Technology Acceptance Model

1Center for Instruction Psychology and Technology, KU Leuven, 3000 Leuven, Belgium
2Leuven Language Institute, KU Leuven, 3000 Leuven, Belgium
3Interdisciplinary Research Team on Technology, Education and Communication-IBBT, KU Leuven-kulak, 8500 Kortrijk, Belgium

Received 13 November 2013; Accepted 29 December 2013; Published 11 February 2014

Academic Editors: M. Akman, S. Cessna, and K. Kiewra

Copyright © 2014 N. A. Juarez Collazo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study adopts the Technology Acceptance Model (TAM) and extends it to study the effects of different variables on tool use. The influence of perceptions on tool use was studied in two different conditions: with and without explanation of the tool functionality. As an external variable, self-efficacy was entered in the TAM and the main research question thus focused on the mediating effects of perceptions (perceived tool functionality and perceived tool usability) between self-efficacy on the one hand and quantity and quality of tool use on the other. Positive effects of perceived usability on perceived functionality were hypothesized as well as positive effects of quantity and quality of tool use on performance. Positive effects were expected in the condition with explanation of the tool functionality. Ninety-three university students were provided with concept maps as the learning tools within a hypertext. Using path analysis, we found—similar to the TAM—a significant positive relationship between perceived usability and perceived functionality. Whereas perceived usability had a positive influence on the quantity of tool use, which positively influenced performance, perceived functionality had a negative influence on quantity of tool use. Self-efficacy showed a relationship with perceived usability only with the explained functionality condition.

1. Introduction

In the 1960s, the introduction of computers in education led to expectations that they would solve learning issues [1] by enabling personalized computer-based instruction, encouraging learners to take a more active role in their education, or simply making learning available online at all times. However, few of those predictions have been realized and a number of issues have emerged. One issue is related to the acceptance of new computer systems, such as text editors, e-mail, spreadsheets, and software in general [2–5]. Another issue is whether the tools within a system are used [2, 3, 6] and what variables influence their use [6]. Finally, there is an issue related to how learning outcomes/performance are affected by the way tools are used [7, 8].

What factors affect the acceptance of new computer systems has been successfully addressed by the Technology Acceptance Model (TAM) [3]. The TAM has been widely used and empirically validated over the last 20 years [4, 5]. Moreover, it is a predecessor and basis for newer models such as the TAM 2 [9], the unified theory of acceptance and use of technology (UTAUT) [10], and the TAM 3 [11]. In summary, when it comes to computer systems adoption, “the TAM is considered as a reliable, simple and parsimonious model” [12]. The TAM not only furthers the scope of the theory of reasoned action (TRA) [13] by emphasizing the acceptance of technology through the prediction of system adoption but also highlights the mediating role of perceptions. According to the TAM (Figure 1), a number of external variables, such as self-efficacy [2, 5, 14], influence two constructs that are considered to be the model’s most important factors [4, 5]: perceived usefulness (the degree to which one believes that using a particular system would enhance one’s performance) [2] and perceived ease of use (the degree to which one believes that using a particular system would be effortless) [2]. Perceived ease of use has been found to influence perceived usefulness. Together, these two perceptions determine “attitude towards use” (one’s evaluation of the desirability of using the system) [15], “behavioral intention to use” (one’s conscious plans to perform or not perform some specified future behavior) [16], and, eventually, “actual system use.”

Figure 1: The Technology Acceptance Model (TAM) [3].

However, the TAM mainly focuses on the adoption of a whole system rather than on the use of tools within a system, more specifically within computer-based learning environments (CBLEs). The TAM only examines variables leading up to the point where the system is used; learning outcomes/performance are not examined further. Therefore, while the TAM provides a baseline to analyze the use of tools within a system, and specifically a CBLE, a model that explores tool use in a more defined way is needed and could possibly be established.

While understanding system use is important, for instructional design it is even more important to understand what affects tool use. The second issue pertains to the use of the tool(s) within a system (a CBLE) and what influences this use. Tools can be described as support devices added to a CBLE [17, 18] that aim at individualized instruction [19]. Hence, different tools have different functionalities, assisting learning and problem solving in diverse ways [20]. However, research in the last two decades has revealed that tool functionalities are not always grasped; tools tend to be neglected [6, 7, 17], among other reasons because learners are often cognitively ill-equipped to use the tools adequately in constructing meaning [21, 22].

Informing learners about tools’ functionalities can be identified as a type of instructional intervention, often referred to as advice [23, 24], additional information [25], or explanation of the tool’s functionality [26]. Offering this instructional intervention to learners may not only result in more optimal tool use [25, 27] but also moderate perceptions [27]. Other studies have found that advice moderates self-efficacy beliefs in learners [28] and affects quality of tool use positively, which in turn influences performance [24]. One could investigate the effect of an explanation (or no explanation) of the tool functionality on self-efficacy, perceptions, tool use (quantity and quality), and performance. However, empirical studies of this kind are largely absent.

Perkins [6] lists three conditions that should be fulfilled in order to attain optimal tool use: having the tool(s) present and functional in a system, recognizing the functionalities of the tools and the relationships between the tools and learning, and being motivated to use the tools. Within Perkins’ [6] conditions, different variables that influence tool use can be identified. Not only does the design of the tools matter, but at least two additional learner variables can also be targeted. First, learners must recognize the tools’ functionality and the relationships between the tools and learning. This knowledge entails, among other variables, adequate tool perceptions, which are learners’ ideas, concepts, and theories on tools and tool usage [29]. Second, self-efficacy is the key element of social cognitive theory, which explains how learners’ approaches to goals, tasks, and challenges are influenced [30]; it is hence a motivational variable that seems to influence learners’ motivation to use the tools [31].

Regarding research on perceptions, studies on the adoption of a system in general—using the TAM—have examined the mediating effects of perceptions on system use [2, 14, 32] and their relationships with self-efficacy [2, 14, 32]. Previous research has shown that learners do not simply react to nominal instructional stimuli as constructed by the designer; instead they often act through their perceptions or interpretations of the environmental stimuli [33–35]. This argument, proposed by the cognitive mediational paradigm, is considered a useful extension to research on instruction [35, 36]. In this line of research, perception is considered an important variable in determining the effectiveness and efficiency of the instructional environment. However, regardless of the emphasis on perceptions as mediators in the cognitive mediational paradigm, the mediating role of perceptions in the context of tool use in CBLEs remains largely theoretical; research on the role of perceptions is still scarce, and the existing literature fails to describe the direction of the effects of perceptions on tool use [29, 37].

Regarding research on self-efficacy, studies on tool use in CBLEs have revealed that self-efficacy has an important influence on behavior [38], in this case, tool use behavior, but the direction of this influence is still inconclusive. For instance, while learners with high self-efficacy beliefs seem to use tools more frequently [39], research has also indicated that those learners with high levels of self-efficacy rely much less on external measures, in this case the use of tools [40]. Accordingly, the TAM has revealed that external variables such as self-efficacy may influence perceptions about the usefulness and ease of use of the system [5].

As aforementioned, the TAM does not explore the performance effects of using a system. In contrast, research exploring tool use in CBLEs has often studied the effects of tool use on performance (e.g., [17, 24, 41]). The use of tools has been explored from different perspectives: quantity (i.e., frequency and duration of tool consultation) (e.g., [23, 41]), quality (e.g., [18, 24]), or both [25, 42]. Results have revealed that quantity does not affect posttest performance, while quality of tool use (i.e., answers provided to adjunct questions) does positively influence performance [42]. Viau and Larivée [41] found that quantity of tool use was the best predictor of performance, but they did not study quality. Empirical studies focusing on tool use could explore both quantitative and qualitative aspects in order to gain deeper insight into how performance is affected by tool use.

Although relationships among the variables (implied in the aforementioned issues) have been previously examined, studies conducting simultaneous examinations of relationships among these variables are rarely found. Thus, guided by the previous research, an initial operational research model for tool use has been elaborated (Figure 2) in order to provide a clear presentation of the whole conceptual framework. This proposed research model, like the TAM, assumes that perceptions are important components of the model. Given the nature of the present context, the names and definitions of the perceptions have been adapted in order to make a distinction between perceptions for system adoption in general and perceptions for tool use in a CBLE [43]. Instead of “perceived usefulness,” the present model uses “perceived functionality”; in place of “perceived ease of use,” the model uses “perceived usability.” Perceived functionality is defined as “the degree to which a learner believes that using a certain tool would enhance his/her performance in order to reach a goal” and perceived usability is defined as “the degree to which a learner believes that a certain tool would be usable (capable of being used) and easy to use.” These definitions merge Davis’ definitions of perceived usefulness and perceived ease of use [2] and Goodwin’s definitions of system functionality and system usability [44].

Figure 2: Proposed research model for tool use.

In the present model, quantity of tool use, quality of tool use, and performance substituted for the constructs of “attitude towards use” and “behavioral intention to use.” Previous research has done the same. Igbaria and colleagues [45] proposed a model in which attitude and behavior were substituted with system usage; they emphasized that perceptions are the major constructs in the TAM. Pituch and Lee (2006) did the same and substituted “attitude towards use” and “behavioral intention to use” with “use for supplementary learning” and “use for distance education”; they made those changes because these constructs were necessary to reflect the specific purposes of the e-learning system under study. In addition, research exploring specifically the role of attitude in the TAM [46–48] has revealed that attitude does not contribute to the overall variance of the dependent variables and hence could be disregarded. With regard to behavior, different studies [49, 50] have intentionally excluded behavior in order to analyze the direct effects of other variables on (system) usage. Already in 1985, a study revealed that other constructs (e.g., behavioral expectation) could be better predictors of self-reported performance. Cho and colleagues explored, for example, continued usage intention [51]; Davis and Warshaw (1985) indicated that behavior could be better approached by other means, for example, observer reports of behavior; and Venkatesh and colleagues [10] concluded that future research should focus on identifying constructs that can further the scope of behavior. Accordingly, the present study explores the quantity and quality of tool use and performance, which reflect learners’ actual tool use behavior and the learning outcomes derived from tool usage.

Therefore taking the TAM as a baseline, the proposed model suggests that perceived usability affects perceived functionality. These two factors are influenced by one external variable: self-efficacy. Self-efficacy, along with perceptions, influences quantity and quality of tool use. Lastly, it is hypothesized that quantity and quality of tool use influence performance. Furthermore, the proposed model shown in Figure 2 was tested and compared over two groups of participants/conditions, one with explanation of tool functionality and the other without explanation of the tool functionality. This was done to see the possible moderating effects that the explanation of tool functionality could have.

Consequently, the present research was driven by the following questions. What is the mediating role of perceptions on tool use? How does self-efficacy affect tool use? And what is the moderating effect of the explanation of tool functionality? These questions were answered by testing six sets of hypotheses (abbreviated in Figure 2 as H1, H2, H3, H4, H5, and H6). The hypotheses are as follows.

Hypothesis 1. Perceived functionality is positively influenced by perceived usability (H1a) and self-efficacy (H1b).

Hypothesis 2. Perceived usability is influenced by self-efficacy (H2).

Hypothesis 3. Quantity of tool use is positively influenced by perceived functionality (H3a) and perceived usability (H3b).

Hypothesis 4. Quality of tool use is positively influenced by perceived functionality (H4a) and perceived usability (H4b).

Hypothesis 5. Performance is positively affected by quantity of tool use (H5a) and quality of tool use (H5b).

Hypothesis 6. The explanation of the tool functionality positively influences self-efficacy (H6a), perceived functionality (H6b), perceived usability (H6c), quantity of tool use (H6d), and quality of tool use (H6e).

2. Method

In order to test the proposed model, a pretest-posttest design was used. The data were collected through questionnaires and log files obtained from a CBLE task. There were two experimental conditions: one with explanation of the tool functionality and one without.

2.1. Participants

The participants were students from a master’s preparatory program in educational studies at the KU Leuven in Belgium. From a population of 165 students, 93 participated in the study. They had an average age of 23 years and about 80% of them were female.

2.2. Instruments
2.2.1. Computer-Based Learning Environment

The CBLE consisted of 10 screens: the first two were introductory screens, seven screens comprised a hypertext, and the last one was the concluding screen. On the first page, participants entered their identifying data (name and student number); on the second page, participants were informed about the structure of the hypertext: they were told they were going to read a text about which they would be asked questions. Next, participants saw the hypertext, which was a scientific article—describing the importance of water on the planet—entitled Waarom water broodnodig is (Why water is essential) [52]. The article comprised 1,544 words (divided into five paragraphs/sections) and two figures. In one of the conditions, participants were given an explanation of the functionality of the tool after reading each section and before accessing the tool (see Figure 3).

Figure 3: After the learner finished reading the paragraph and clicked on the “next page” button (volgende pagina), the explanation of the tool functionality in the gray box (see translation in box next to it) was displayed for the condition with explained tool functionality.

A concept map was placed after each of the five sections. A concept map is a graphical tool for organizing and representing knowledge by means of relationships among concepts, represented in a traditional hierarchical fashion [53, 54]. According to Ruiz-Primo [55], concept maps can vary in their degree of directedness. For instance, a map with a low level of directedness supplies no concepts, links, or structures, while a highly directed concept map supplies concepts, links, and structures. The concept maps in the present study had a high degree of directedness, meaning that concepts, links, and structures were provided; only three concepts were missing and had to be completed by the participants (see Figure 4). The concept maps had to be completed by typing the answers in a gray-shaded space below the concept map. Students were expected to learn about the importance of water on earth. This text was not part of the learners’ curriculum and was chosen for this study so as to raise environmental awareness.

Figure 4: Example of tool: concept map. Concepts are enclosed in boxes. The concepts in the boxes with numbers and question marks had to be completed by typing in the space below the concept map.
2.2.2. Self-Efficacy

To assess self-efficacy, seven of the eight items were taken from the Motivated Strategies for Learning Questionnaire (MSLQ) [56]. One item assessing self-efficacy for learning was excluded because it was redundant in the CBLE context of the present study. In order to balance the number of items between self-efficacy for performance and self-efficacy for learning, one more item was added. This item was adapted from the Self- and Task-Perception Questionnaire (STPQ) [57]; the wording was altered to refer to the current task and context. The resulting questionnaire, which had already been validated and employed in previous studies with high reliabilities (Cronbach’s alpha scores above .80) (e.g., [26]), assessed self-efficacy for performance and learning. The questionnaire used a six-point Likert scale ranging from “strongly disagree” to “strongly agree.” Cronbach’s alpha in the present study was .89. Examples of the statements include “I know which approach is the most adapted in order to successfully accomplish this task” and “I’m certain I will be able to understand the most difficult parts of the text.”
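As an illustration of the reliability index reported above, Cronbach’s alpha can be computed directly from an item-score matrix. The following sketch uses a synthetic response matrix (93 respondents, 8 six-point Likert items); the data are hypothetical, not the study’s.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 6-point Likert responses: items share a common component per
# respondent (base) plus small item-level noise, so they intercorrelate.
rng = np.random.default_rng(0)
base = rng.integers(1, 7, size=(93, 1))
noise = rng.integers(-1, 2, size=(93, 8))
scores = np.clip(base + noise, 1, 6)
print(round(cronbach_alpha(scores), 2))
```

Because the synthetic items share a common component, alpha comes out high, mirroring the kind of internal consistency (.89) reported for the questionnaire.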

2.2.3. Perceived Functionality and Perceived Usability

The statements about “perceived usefulness of a system” and “perceived ease of use of a system” [2] were adapted to tool use: “perceived functionality” and “perceived usability,” respectively. Each statement was adapted and translated into Dutch and then revised by three different researchers using the translation/back-translation method in order to avoid semantic problems [58]. Both questionnaires employed a six-point Likert scale: 1 represented “totally disagree” and 6 represented “totally agree.” Cronbach’s alpha scores were adequate: .76 for perceived functionality and .83 for perceived usability. Examples of the perceived functionality statements include “Answering the concept maps in a text will improve my performance” and “Answering the concept maps in a text will be necessary for my learning.” Examples of the perceived usability statements include “Answering the concept maps in a text will be easy for me” and “I think that the concept maps in a text will be easy to use.”

2.2.4. Quantity and Quality of Tool Use

Log files were kept in a Microsoft Access database that contained participants’ identities and the number of seconds they spent using the tools. Thus, quantity of tool use was operationalized as the proportional time participants spent on the concept maps. Quality of tool use was analyzed using the answers provided in the boxes on the concept maps. Since each of the five concept maps had three boxes to complete, participants could obtain a maximum of 15 points (one point per correct answer). The answers on the concept maps were recorded as text files after the participant concluded the activity. They were retrieved and reviewed by three raters using a correction key with the possible answers. Interrater reliability for grading the concept maps showed outstanding agreement among the three raters.
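One common way to quantify agreement among three raters on categorical judgments (here, correct versus incorrect answers) is Fleiss’ kappa. The sketch below uses hypothetical rating counts for 15 answers, not the study’s data, purely to illustrate the computation.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa. counts: (n_subjects, n_categories); each row sums to
    the number of raters (here 3)."""
    n = counts.sum(axis=1)[0]                  # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()    # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical: 15 concept-map answers rated correct/incorrect by 3 raters,
# mostly unanimous (12 all-correct, 2 split 2-1, 1 all-incorrect).
ratings = np.array([[3, 0]] * 12 + [[2, 1]] * 2 + [[0, 3]])
print(round(fleiss_kappa(ratings), 2))  # → 0.55
```

Note that with heavily skewed category proportions, as in this toy example, kappa can be moderate even when raw agreement is very high; this is a known property of chance-corrected agreement indices.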

2.2.5. Learning Outcomes

Prior knowledge (related to the topic from the hypertext) was measured by a pretest in order to assess possible differences among conditions. Performance was evaluated through a posttest. Three researchers collaborated on developing the tests.

The pretest consisted of 10 multiple-choice questions exploring learners’ factual knowledge related to the hypertext’s topic. Examples of the questions included the following: “How much water on average does a Belgian use per year?” and “What does water footprint refer to?” Each correct answer was worth one point, so a participant could obtain a maximum of 10 points.

The posttest contained knowledge and insight questions. It consisted of 16 items: seven were multiple-choice items (e.g., “What does FAO stand for?”), three were fill-in-the-blank sentences (e.g., “In the next 50 years the demand for food will ————.”), and the last six items were true or false statements (e.g., “According to the text, changes in the weather cause a rise in the temperature.”). Participants could earn a point for every correct answer and obtain a maximum score of 16 points.

2.3. Procedure

The experiment was divided into two sessions. In the first session, all participants filled out the questionnaires about perceptions and self-efficacy. Afterwards, participants could enroll for one of the 14 appointments available for the second session. A maximum of 15 participants participated in each session.

The second session was the CBLE session. Participants were randomly and equally divided between the two experimental conditions. They entered the computer room, sat in front of a computer, were given instructions, and started the task. After the participants finished with the hypertext, they were given the performance tests.

3. Data Analysis

The data sources were the prior knowledge and performance tests, the self-efficacy, perceived functionality, and perceived usability scores, and the quantity and quality of tool use measures. First, an ANOVA was conducted to determine whether the conditions differed in prior knowledge. An ANOVA was chosen over a t-test because ANOVA controls better for differences in standard deviations between groups.

Next, path estimates were calculated by OLS regression. SPSS 18 was used to test the hypotheses from the proposed research model and to examine the mediating effects of perceived functionality and perceived usability. Regression was preferred over structural equation modeling because the sample size fell short of what structural equation modeling assumes [59]. First, a path analysis was conducted for the entire sample (N = 93). Then a separate path analysis was performed for each condition, with and without explanation of tool functionality (n = 47 and n = 46). Although the sample sizes for the separate path analyses were not large, they were considered appropriate because OLS regression was used. Previous studies using regression [39, 60] have considered samples in a similar range. According to Ingram and colleagues, a sample such as the one in the present study is sufficiently large for all the zero-order and multiple correlations to be statistically significant beyond .01; Wu et al. [39] followed the same reasoning in their study. In addition, a sample should be at least five times larger than the number of estimated paths to ensure reliable results [61]. Given that there were nine estimated paths, a minimum of 45 participants was needed for each analysis, a requirement that was met. Finally, the requirements of linearity, additivity, and low multicollinearity for path analysis [62] were also met. Therefore, it was still appropriate to test the model and the observed set of correlations between variables using path analysis.
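Path analysis via OLS amounts to one regression per endogenous variable, with standardized coefficients as the path estimates. The sketch below illustrates this structure on simulated data shaped like the proposed model (variable names and effect sizes are hypothetical; the study used SPSS, not this code).

```python
import numpy as np

def std_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Standardized OLS coefficients: z-score predictors and outcome,
    then solve ordinary least squares."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    z = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Z, z, rcond=None)
    return beta

# Simulated sample of N = 93 with illustrative (not the study's) path weights.
rng = np.random.default_rng(1)
n = 93
se = rng.normal(size=n)                     # self-efficacy
usab = 0.2 * se + rng.normal(size=n)        # perceived usability
func = 0.5 * usab + rng.normal(size=n)      # perceived functionality
quant = 0.4 * usab - 0.3 * func + rng.normal(size=n)  # quantity of tool use
qual = rng.normal(size=n)                   # quality of tool use
perf = 0.3 * quant + rng.normal(size=n)     # performance

# One regression per endogenous variable, mirroring the model's paths
paths = {
    "usability ~ SE": std_betas(se[:, None], usab),
    "functionality ~ SE + usability": std_betas(np.column_stack([se, usab]), func),
    "quantity ~ functionality + usability": std_betas(np.column_stack([func, usab]), quant),
    "quality ~ functionality + usability": std_betas(np.column_stack([func, usab]), qual),
    "performance ~ quantity + quality": std_betas(np.column_stack([quant, qual]), perf),
}
for name, b in paths.items():
    print(name, np.round(b, 2))
```

Each printed vector corresponds to the standardized betas on one set of arrows in Figure 2; the five-cases-per-path rule of thumb cited above concerns the total number of such coefficients being estimated.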

4. Results

ANOVA results suggested no differences between the conditions with respect to prior knowledge. The descriptive statistics (Table 1) show that the scores on the self-efficacy, perceived usability, and perceived functionality questionnaires ranged above the midpoint (3.5) with standard deviations no larger than .88. Participants could earn a maximum of 15 points for quality of tool use, and participants in both conditions had similar results. The maximum score for performance was 16 points, and results did not differ between the two conditions. However, a difference could be observed in quantity of tool use (the proportional amount of time in seconds spent with the tool): participants who did not receive an explanation of the tool’s functionality spent more time with the tool.

Table 1: Descriptive statistics for each variable in the proposed research model.

The zero-order correlations are shown in Table 2: self-efficacy was positively and significantly correlated to perceived usability and performance. As suggested by the TAM, a significant correlation between perceived functionality and perceived usability was also observed. As predicted, there was a significant correlation between perceived usability and quantity and quality of tool use. The last significant correlation was between quantity of tool use and performance.

Table 2: Correlations among variables in the proposed research model.

The proposed research model was then tested with path analysis. Table 3 summarizes the decomposition effects of the path analysis for the entire sample for each hypothesis. Figure 5 shows the path model for the entire sample. Significant paths are represented in Figure 5 by solid lines with their standardized beta coefficients; four hypotheses were thus confirmed. However, one of those hypotheses, represented by a gray solid line, showed a negative relationship: perceived functionality had a negative influence on quantity of tool use. The other three hypotheses were fully confirmed. Perceived usability had a direct positive effect on perceived functionality. Perceived usability also had a positive effect on quantity of tool use. Quantity of tool use showed a direct positive influence on performance. The paths with dotted lines represent nonsignificant effects.

Table 3: Decomposition effects for the entire sample for each hypothesis.
Figure 5: Path analysis of the proposed research model for the entire sample. Dotted lines represent no significant relationship; solid lines represent positive significant relationships; gray lines represent negative but significant relationships.

One of the research questions and Hypothesis 6 aimed to analyze the possible moderating effects of the explanation of the tool functionality, in order to determine whether the same pattern of relationships could be observed in the two conditions. The path analyses for each of the conditions are therefore illustrated in Figures 6 and 7. The pattern of the hypothesized relationships was quite consistent regarding Hypotheses 1a and 5a; that is, the effects of perceived usability on perceived functionality and of self-efficacy on performance were constant.

Figure 6: Path analysis of the proposed research model for the condition with explained tool functionality. Dotted lines represent no significant relationship; solid lines represent positive significant relationships.
Figure 7: Path analysis of the proposed research model for the condition with nonexplained tool functionality. Dotted lines represent no significant relationship; solid lines represent positive significant relationships.

However, a remarkable difference was observed for Hypothesis 2. In the path analysis with explained functionality, there was a positive significant relationship between self-efficacy and perceived usability. This relationship could not be observed either in the path analysis for the whole sample or in the path analysis for the condition without explained functionality. This means that H2 was confirmed only in the condition with explained functionality. Another difference emerged between the analysis of the entire sample and the analyses performed per condition. In the entire-sample path analysis, a relationship between both perceptions and quantity of tool use was observed; this result could not be obtained in either of the separate path analyses (with and without explanation of the tool functionality).

5. Discussion

This contribution aimed to address three issues related to CBLEs: first, the development of a model of tool use based on the TAM; second, the variables that influence tool use, namely, self-efficacy and perceptions (perceived functionality and perceived usability); and, third, how quantity and quality of tool use influence performance. Additionally, the moderating effects of advice/explanation of tool functionality on the different independent variables were examined.

The results supported some of our hypotheses and refuted others. Only one part of Hypothesis 1 could be confirmed: perceived functionality was positively influenced by perceived usability (H1a). Studies specifically related to the TAM have also found a strong relationship between these perceptions (e.g., [14, 51, 63–65]). This finding suggests that perceived usability and perceived functionality are both relevant to tool use in CBLEs. Regarding Hypothesis 1b, no effect was found; that is, no relationship was found between perceived functionality and self-efficacy. However, a nearly significant effect between perceived functionality and self-efficacy was found in the entire-sample analysis, and the same pattern could be retrieved for the condition with no explained functionality. These results could indicate a relationship between self-efficacy and perceived functionality in a CBLE setting when no instructional interventions, such as advice or explained tool functionality, are provided. At this point, however, it is premature to make such an inference, and further research should follow.

Hypothesis 2 could not be confirmed for the entire sample: no relationship was found between perceived usability and self-efficacy. Although a significant correlation was found between perceived usability and self-efficacy (Table 2), the regression results (Table 3) show that this relationship was only marginally significant. However, in the separate path analysis for the condition with explained functionality, this relationship was significant: self-efficacy affected perceived usability positively. This result shows that the explanation of tool functionality can affect the relationship between self-efficacy and perceived usability.

These findings, showing that self-efficacy was unrelated to both perceived functionality and perceived usability, contradict previous studies [2, 32] that found a relationship between self-efficacy and perceptions. Pituch and Lee [14], however, found a positive and significant effect only between self-efficacy and perceived usability (perceived ease of use). In this study, the same relationship was observed in the correlations table (Table 2) and was further obtained in the path analysis for the condition with explanation of tool functionality. This finding sheds light on how the presence of an explanation of tool functionality can modify the effects of different variables, in this case self-efficacy and perceived usability.

For Hypothesis 3, the following findings were obtained. Quantity of tool use was influenced by both perceived functionality and perceived usability. This finding builds on literature suggesting that perceptions play a significant role in the use of tools [29, 37]. It also adds to the literature on the cognitive mediational paradigm [34], which indicates that performance is mediated by students' cognitive and metacognitive processes (in this case, perceived usability and perceived functionality). Unexpectedly, however, perceived functionality had a negative effect on tool use while perceived usability had a positive effect. According to the definitions of these perceptions, this means that the more learners perceive a tool to be usable and easy to use (perceived usability), and the less they believe that using the tool would enhance their performance in reaching a goal (perceived functionality), the more time they will spend on the tool(s). This finding suggests which types of perceptions should be considered in further studies exploring tool use in CBLEs, but it also raises questions, discussed below. The effects of both perceptions on quantity of tool use disappeared in the separate path analyses with and without explanation of the tool functionality. Hence it is possible that the explanation of the tool functionality moderated the role of perceptions in the whole sample. It is also possible that the explanation influenced learners' perceptions of the tools, as in previous studies [66] where advice conflicted with perceived functionality, leading to a negative effect on frequency of tool use. In this study, the effect of perceived functionality was negative while that of perceived usability was positive.
The explanation of the tool functionality thus also had an indirect effect on the quantity of tool use, specifically the time spent on the tool. Because this result could not be replicated in the separate analysis of the explained-functionality condition, the claim cannot be fully confirmed and the direction of the effect cannot yet be determined. What is certain is that the result further supports the statement that explaining the functionality of a tool may moderate perceptions [27]. Methodologically, it is possible that the effect could not be observed due to the number of participants; a structural equation modeling analysis could have tested whether the conditions differ significantly. The explanation of the tool functionality could also have been too invasive, affecting the participants' internal processes [67] (i.e., the explanation may have caused reluctance to use the tools), which in turn led them to spend less time with the tool. Finally, this result raises two questions: first, whether there was a mismatch between the functionality learners assign to a concept map and the explanation given, and second, whether the fact that the text was not related to the curriculum weakened the influence of the explanation of the tool functionality. These conjectures, however, require further empirical grounding.
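The moderation question raised here, whether the explained-functionality condition changes the strength of a path, can be tested with an interaction term in a single regression instead of separate subgroup analyses. A minimal sketch on simulated data (all variable names and effect sizes are assumptions for illustration, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 93
usability = rng.standard_normal(n)
# 0 = no explanation of tool functionality, 1 = explained functionality
condition = rng.integers(0, 2, n).astype(float)

# Assumed generative model: the usability -> time-on-tool slope is allowed
# to differ between conditions (illustrative values only).
time_on_tool = (0.4 + 0.3 * condition) * usability + rng.standard_normal(n)

# Moderated regression: y = b0 + b1*x + b2*m + b3*(x*m);
# a nonzero b3 would indicate moderation by the condition.
X = np.column_stack([np.ones(n), usability, condition, usability * condition])
coef, *_ = np.linalg.lstsq(X, time_on_tool, rcond=None)
print(coef)
```

The interaction coefficient directly quantifies the between-condition difference in the path, which is what separate subgroup analyses with a small sample cannot test formally.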

In relation to Hypothesis 4, quality of tool use was not positively influenced by either perceived functionality or perceived usability in any of the analyses. While the role of perceptions in quality of tool use has been emphasized theoretically [37], empirical studies have mainly explored quantity of tool use [23]. Along the same lines, studies exploring the role of instructional interventions, in this case the effect of explanation of tool functionality on tool use, have mainly focused on quantity of tool use (time and frequency) (e.g., [23, 68, 69]). Although this study found neither significant effects of perceptions on quality of tool use nor a moderating effect of the explanation of tool functionality, this result encourages research exploring the role of perceptions and of the explanation of tool functionality in relation to quality of tool use.

In relation to Hypothesis 5, one of its two parts was confirmed: performance was positively affected by quantity of tool use but not by quality of tool use. First, the relationship between quantity of tool use and performance is in line with previous research [41] and can be considered strong (4.84% of variance explained) given that we studied a specific population, namely graduate students. Surprisingly, and contrary to previous research [24, 42], quality of tool use did not significantly influence performance. However, the tools used in those studies were a discussion board [24] and adjunct questions [42], whereas this study explored concept maps. It is possible that tool type affects the relationship between quality of tool use and performance, since different types of tools provide different learning opportunities [70] and support varied purposes [71]. In addition, it is unclear whether the quality measure is sufficiently valid as a measure of learning processes, which leads us to question the construct validity of this measure. According to Raphael and Pearson [72], the quality of tool use (answering questions or, in this case, completing the concept maps) is strongly influenced by reading ability and text complexity. Hence, it is possible that in this study the students' reading ability and/or the text complexity obscured the potential effect of quality of tool use on performance. It is also possible, based on these findings, that the indirect positive effect of perceived usability and the indirect negative effect of perceived functionality on performance, through quantity of tool use, take place only under certain conditions, namely the presence of the explanation of the tool functionality.
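For reference, the reported 4.84% share of explained variance corresponds, for a single predictor, to a bivariate correlation of r = √0.0484:

```python
import math

# Variance in performance explained by quantity of tool use, as reported above
r_squared = 0.0484

# The corresponding correlation coefficient for a single predictor
r = math.sqrt(r_squared)
print(round(r, 2))  # → 0.22
```

A correlation of about .22 is conventionally a small-to-medium effect, which supports reading it as meaningful for a homogeneous graduate-student sample.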

The results obtained for Hypotheses 4 and 5 could also be explained from a methodological perspective: the quantity and quality of tool use might have been influenced by the cognitive processes learners had to engage in when dealing with the tools and the learning task, or simply when trying to figure out how to use the tool (cognitive load theory [73]).

Finally, in the correlations table (Table 2), a significant correlation was found between self-efficacy and performance. Although this relationship was not part of the research framework, studies conducted outside CBLEs have found self-efficacy to be a strong predictor of performance [38, 74]. For instance, van Dinther and colleagues [75] described self-efficacy as vital to academic performance; more importantly, a recent meta-analysis [76] of the relationship between self-efficacy and transfer (performance) found that the effect of self-efficacy on performance is strengthened within CBLEs. It is therefore well worth exploring, in further research, the direct effects that self-efficacy may have on performance in a CBLE context.

6. Limitations and Further Research

Some limitations of the present paper should be noted. One is sample size: given the relatively small sample, no further learner characteristics could be evaluated within our model. Research exploring tool use in CBLEs has also examined learners' metacognitive characteristics, such as self-regulation, and motivational characteristics other than self-efficacy, such as goal orientation. A larger sample would have allowed more complex analyses, such as structural equation modeling, and the inclusion of more variables in our research model for tool use in CBLEs.

Another limitation concerns the design of the study. Because it was purely experimental, it did not show how learners use tools in the "real" world, and learners may have used all the tools because they felt implicitly "forced" to do so. Although the present results provide a clearer perspective on actual tool use, further studies would benefit from examining this matter in ecologically valid settings or by employing mixed-methods research approaches.

7. Conclusion

Overall, this paper aimed to model perceptions of tool use in CBLEs based on the TAM and the cognitive mediational paradigm. Moreover, it explored the moderating role of the explanation of tool functionality in CBLEs and the effects of tool use on performance. These results add to the literature on tool use in CBLEs. Even though self-efficacy had no significant effects on perceptions overall, the results indicated that it might affect perceived usability when tool functionality is explained. This paper also established that perceived usability and perceived functionality are related, just as perceived ease of use and perceived usefulness are related in the TAM. It also emphasizes the effect of perceptions on tool use, in this case perceived usability. Perceived functionality and perceived usability therefore seem to be adequate constructs in contexts specifically related to tool use in CBLEs. Finally, this study brings to light the positive effects tool use can have on performance. This result not only supports theoretical and empirical claims from general learning environments that learners should have sufficient practice and spend enough time on learning tasks in order to obtain larger learning gains (e.g., [77]) but also extends this claim to CBLEs.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors express their gratitude to Fonds Wetenschappelijk Onderzoek-Vlaanderen (FWO) Grant G.0408.09, which funded this research. They also thank Griet Lust for her help with the log files.

References

  1. M. D. Bush and J. D. Mott, “The transformation of learning with technology: learner-centricity, content and tool malleability, and network effects,” Educational Technology Magazine, pp. 3–19, 2009.
  2. F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of information technology,” MIS Quarterly, vol. 13, no. 3, pp. 319–339, 1989.
  3. F. D. Davis, R. P. Bagozzi, and P. R. Warshaw, “User acceptance of computer technology: a comparison of two theoretical models,” Management Science, vol. 35, no. 8, pp. 982–1003, 1989.
  4. Y. Li, J. Qi, and H. Shu, “Review of relationships among variables in TAM,” Tsinghua Science and Technology, vol. 13, no. 3, pp. 273–278, 2008.
  5. P. Legris, J. Ingham, and P. Collerette, “Why do people use information technology? A critical review of the technology acceptance model,” Information and Management, vol. 40, no. 3, pp. 191–204, 2003.
  6. D. N. Perkins, “The fingertip effect: how information-processing technology shapes thinking,” Educational Researcher, vol. 14, no. 7, pp. 11–17, 1985.
  7. V. Aleven, E. Stahl, S. Schworm, F. Fischer, and R. Wallace, “Help seeking and help design in interactive learning environments,” Review of Educational Research, vol. 73, no. 3, pp. 277–320, 2003.
  8. G. Clarebout and J. Elen, “Tool use in computer-based learning environments: towards a research framework,” Computers in Human Behavior, vol. 22, no. 3, pp. 389–411, 2006.
  9. V. Venkatesh and F. D. Davis, “Theoretical extension of the technology acceptance model: four longitudinal field studies,” Management Science, vol. 46, no. 2, pp. 186–204, 2000.
  10. V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis, “User acceptance of information technology: toward a unified view,” MIS Quarterly, vol. 27, no. 3, pp. 425–478, 2003.
  11. V. Venkatesh and H. Bala, “Technology acceptance model 3 and a research agenda on interventions,” Decision Sciences, vol. 39, no. 2, pp. 273–315, 2008.
  12. J. Bourgonjon, M. Valcke, R. Soetaert, and T. Schellens, “Students' perceptions about the use of video games in the classroom,” Computers and Education, vol. 54, no. 4, pp. 1145–1156, 2010.
  13. I. Ajzen and M. Fishbein, Understanding Attitudes and Predicting Social Behavior, Prentice-Hall, Englewood Cliffs, NJ, USA, 1980.
  14. K. A. Pituch and Y. K. Lee, “The influence of system characteristics on e-learning use,” Computers and Education, vol. 47, no. 2, pp. 222–244, 2006.
  15. K. Mathieson, “Predicting user intentions: comparing the technology acceptance model with the theory of planned behavior,” Information Systems Research, vol. 2, no. 3, pp. 173–191, 1991.
  16. P. R. Warshaw and F. D. Davis, “Disentangling behavioral intention and behavioral expectation,” Journal of Experimental Social Psychology, vol. 21, no. 3, pp. 213–228, 1985.
  17. M. Bannert, M. Hildebrand, and C. Mengelkamp, “Effects of a metacognitive support device in learning environments,” Computers in Human Behavior, vol. 25, no. 4, pp. 829–835, 2009.
  18. J. Zumbach, “The role of graphical and text based argumentation tools in hypermedia learning,” Computers in Human Behavior, vol. 25, no. 4, pp. 811–817, 2009.
  19. R. Azevedo, “Using hypermedia as a metacognitive tool for enhancing student learning? The role of self-regulated learning,” Educational Psychologist, vol. 40, no. 4, pp. 199–209, 2005.
  20. M. Liu and S. Bera, “An analysis of cognitive tool use patterns in a hypermedia learning environment,” Educational Technology Research and Development, vol. 53, no. 1, pp. 5–21, 2005.
  21. T. Iiyoshi, M. J. Hannafin, and F. Wang, “Cognitive tools and student-centred learning: rethinking tools, functions and applications,” Educational Media International, vol. 42, no. 4, pp. 281–296, 2005.
  22. T. Iiyoshi and M. J. Hannafin, “Cognitive tools for open-ended learning environments: theoretical and implementation perspectives,” in Proceedings of the Annual Meeting of the American Educational Research Association, San Diego, Calif, USA, 1998.
  23. G. Clarebout and J. Elen, “The complexity of tool use in computer-based learning environments,” Instructional Science, vol. 37, no. 5, pp. 475–486, 2009.
  24. B. D. Pulford, “The influence of advice in a virtual learning environment,” British Journal of Educational Technology, vol. 42, no. 1, pp. 31–39, 2011.
  25. C. Gräsel, F. Fischer, and H. Mandl, “The use of additional information in problem-oriented learning environments,” Learning Environments Research, vol. 3, no. 3, pp. 287–305, 2000.
  26. N. A. Juarez Collazo, J. Elen, and G. Clarebout, “To use or not to use tools in interactive learning environments: a question of self-efficacy?” The Literacy Information and Computer Education Journal, vol. 1, no. 1, pp. 810–817, 2012.
  27. P. H. Winne, “Steps toward promoting cognitive achievements,” The Elementary School Journal, vol. 85, no. 5, pp. 673–693, 1985.
  28. H. van der Meij, J. van der Meij, and R. Harmsen, “Animated pedagogical agents: do they advance student motivation and learning in an inquiry learning environment?” Tech. Rep. TR-CTIT-12-02, Centre for Telematics and Information Technology, University of Twente, Enschede, The Netherlands, 2012.
  29. P. H. Gerjets and F. W. Hesse, “When are powerful learning environments effective? The role of learner activities and of students' conceptions of educational technology,” International Journal of Educational Research, vol. 41, no. 6, pp. 445–465, 2004.
  30. A. Bandura, Self-Efficacy: The Exercise of Control, W.H. Freeman and Company, New York, NY, USA, 1997.
  31. P. K. Murphy and P. A. Alexander, “A motivated exploration of motivation terminology,” Contemporary Educational Psychology, vol. 25, no. 1, pp. 3–53, 2000.
  32. K. S. Hong, J. L. A. Cheng, and T. L. Liau, “Effects of system's and user's characteristics on e-learning use: a study at Universiti Malaysia Sarawak,” Journal of Science and Mathematics Education in Southeast Asia, vol. 28, no. 2, pp. 1–25, 2005.
  33. L. Luyten, J. Lowyck, and F. Tuerlinckx, “Task perception as a mediating variable: a contribution to the validation of instructional knowledge,” British Journal of Educational Psychology, vol. 71, no. 2, pp. 203–223, 2001.
  34. P. H. Winne and R. W. Marx, Students' Cognitive Processes While Learning From Teaching. Final Report, Instructional Psychology Research Group, Faculty of Education, Simon Fraser University, British Columbia, Canada, 1983.
  35. P. H. Winne, “Why process-product research cannot explain process-product findings and a proposed remedy: the cognitive mediational paradigm,” Teaching and Teacher Education, vol. 3, no. 4, pp. 333–356, 1987.
  36. P. H. Winne, “Minimizing the black box problem to enhance the validity of theories about instructional effects,” Instructional Science, vol. 11, no. 1, pp. 13–28, 1982.
  37. J. Lowyck, J. Elen, and G. Clarebout, “Instructional conceptions: analysis from an instructional design perspective,” International Journal of Educational Research, vol. 41, no. 6, pp. 429–444, 2004.
  38. F. Pajares, “Self-efficacy in academic settings,” paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, Calif, USA, 1995.
  39. X. Wu, J. Lowyck, L. Sercu, and J. Elen, “Task complexity, student perceptions of vocabulary learning in EFL, and task performance,” British Journal of Educational Psychology, vol. 83, no. 1, pp. 160–181, 2013.
  40. E. A. Linnenbrink and P. R. Pintrich, “Multiple goals, multiple contexts: the dynamic interplay between personal goals and contextual goal stresses,” in Motivation in Learning Contexts: Theoretical and Methodological Implications, S. E. Volet and S. Järvelä, Eds., Elsevier, Amsterdam, The Netherlands, 2001.
  41. R. Viau and J. Larivée, “Learning tools with hypertext: an experiment,” Computers and Education, vol. 20, no. 1, pp. 11–16, 1993.
  42. L. Jiang and J. Elen, “Instructional effectiveness of higher-order questions: the devil is in the detail of students' use of questions,” Learning Environments Research, vol. 14, no. 3, pp. 279–298, 2011.
  43. N. A. Juarez Collazo, J. Elen, and G. Clarebout, “Perceptions for tool use: in search of a tool use model,” in Proceedings of the World Conference on Educational Multimedia, Hypermedia and Telecommunications, pp. 2905–2912, 2012.
  44. N. C. Goodwin, “Functionality and usability,” Communications of the ACM, vol. 30, no. 3, pp. 229–233, 1987.
  45. M. Igbaria, S. Parasuraman, and J. J. Baroudi, “A motivational model of microcomputer usage,” Journal of Management Information Systems, vol. 13, no. 1, pp. 127–143, 1996.
  46. T. Teo, “Is there an attitude problem? Reconsidering the role of attitude in the TAM,” British Journal of Educational Technology, vol. 40, no. 6, pp. 1139–1141, 2009.
  47. N. Nistor and J. O. Heymann, “Reconsidering the role of attitude in the TAM: an answer to Teo (2009a),” British Journal of Educational Technology, vol. 41, no. 6, pp. E142–E145, 2010.
  48. Ö. F. Ursavaş, “Reconsidering the role of attitude in the TAM: an answer to Teo (2009) and Nistor and Heymann (2010), and Lopez-Bonilla and Lopez-Bonilla (2011),” British Journal of Educational Technology, vol. 44, no. 1, pp. E22–E25, 2013.
  49. S. S. Al-Gahtani and M. King, “Attitudes, satisfaction and usage: factors contributing to each in the acceptance of information technology,” Behaviour and Information Technology, vol. 18, no. 4, pp. 277–297, 1999.
  50. F. D. Davis, R. P. Bagozzi, and P. R. Warshaw, “Extrinsic and intrinsic motivation to use computers in the workplace,” Journal of Applied Social Psychology, vol. 22, no. 14, pp. 1111–1132, 1992.
  51. V. Cho, T. C. E. Cheng, and W. M. J. Lai, “The role of perceived user-interface design in continued usage intention of self-paced e-learning tools,” Computers and Education, vol. 53, no. 2, pp. 216–227, 2009.
  52. D. Raes, S. Geerts, and E. Vanuytrecht, “Waarom water broodnodig is,” Bio-Ingenieus, vol. 12, no. 5, pp. 2–4, 2009.
  53. J. Novak and A. Cañas, “The theory underlying concept maps and how to construct and use them,” Tech. Rep. 2006-01 Rev 01-2008, Florida Institute for Human and Machine Cognition (IHMC), Pensacola, Fla, USA, 2008, http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMapsHQ.pdf.
  54. J. Novak and D. B. Gowin, Learning How to Learn, Cambridge University Press, New York, NY, USA, 1984.
  55. M. A. Ruiz-Primo, “Examining concept maps as an assessment tool,” in Proceedings of the 1st International Conference on Concept Mapping, Pamplona, Spain, 2004.
  56. P. R. Pintrich, D. Smith, T. Garcia, and W. McKeachie, A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ), National Center for Research to Improve Postsecondary Teaching and Learning (NCRIPTAL), University of Michigan, Ann Arbor, Mich, USA, 1991.
  57. K. R. Lodewyk and P. H. Winne, “Relations among the structure of learning tasks, achievement, and changes in self-efficacy in secondary students,” Journal of Educational Psychology, vol. 97, no. 1, pp. 3–12, 2005.
  58. O. Behling and K. S. Law, Translating Questionnaires and Other Research Instruments: Problems and Solutions, Sage Publications, Thousand Oaks, Calif, USA, 2000.
  59. J. Jaccard and C. K. Wan, LISREL Approaches to Interaction Effects in Multiple Regression, Sage Publications, Thousand Oaks, Calif, USA, 1996.
  60. K. L. Ingram, J. G. Cope, B. L. Harju, and K. L. Wuensch, “Applying to graduate school: a test of the theory of planned behavior,” Journal of Social Behavior and Personality, vol. 15, no. 2, pp. 215–226, 2000.
  61. P. S. Petraitis, A. E. Dunham, and P. H. Niewiarowski, “Inferring multiple causality: the limitations of path analysis,” Functional Ecology, vol. 10, no. 4, pp. 421–431, 1996.
  62. A. Field, Discovering Statistics Using SPSS, SAGE, London, UK, 3rd edition, 2009.
  63. S. H. Lau and P. C. Woods, “An investigation of user perceptions and attitudes towards learning objects,” British Journal of Educational Technology, vol. 39, no. 4, pp. 685–699, 2008.
  64. T. Teo, “Examining the intention to use technology among pre-service teachers: an integration of the technology acceptance model and theory of planned behavior,” Interactive Learning Environments, vol. 20, no. 1, pp. 3–18, 2012.
  65. Y. Gao, “Applying the Technology Acceptance Model (TAM) to educational hypermedia: a field study,” Journal of Educational Multimedia and Hypermedia, vol. 14, no. 3, pp. 237–247, 2005.
  66. G. Clarebout and J. Elen, “Tool use in open learning environments: in search of learner-related determinants,” Learning Environments Research, vol. 11, no. 2, pp. 163–178, 2008.
  67. M. D. Merrill, “Learner control in computer based learning,” Computers and Education, vol. 4, no. 2, pp. 77–95, 1980.
  68. Y. B. Lee and J. D. Lehman, “Instructional cuing in hypermedia: a study with active and passive learners,” Journal of Educational Multimedia and Hypermedia, vol. 2, no. 1, pp. 25–37, 1993.
  69. C. A. Carrier, G. V. Davidson, M. D. Williams, and C. M. Kalweit, “Instructional options and encouragement effects in a microcomputer-delivered concept lesson,” The Journal of Educational Research, vol. 79, no. 4, pp. 222–229, 1986.
  70. G. Lust, N. A. Juarez Collazo, J. Elen, and G. Clarebout, “Content management systems: enriched learning opportunities for all?” Computers in Human Behavior, vol. 28, no. 3, pp. 795–808, 2012.
  71. M. Hannafin, S. Land, and K. Oliver, “Open learning environments: foundations, methods and models,” in Instructional Design, Theories and Models, C. M. Reigeluth, Ed., pp. 115–140, Lawrence Erlbaum, Mahwah, NJ, USA, 1999.
  72. T. E. Raphael and P. D. Pearson, “Increasing students' awareness of sources of information for answering questions,” The American Educational Research Journal, vol. 22, no. 2, pp. 217–235, 1985.
  73. J. Sweller, J. G. van Merrienboer, and F. G. W. C. Paas, “Cognitive architecture and instructional design,” Educational Psychology Review, vol. 10, no. 3, pp. 251–296, 1998.
  74. M. E. Gist, C. K. Stevens, and A. G. Bavetta, “Effects of self-efficacy and post-training intervention on the acquisition and maintenance of complex interpersonal skills,” Personnel Psychology, vol. 44, no. 4, pp. 837–861, 1991.
  75. M. van Dinther, F. Dochy, and M. Segers, “Factors affecting students' self-efficacy in higher education,” Educational Research Review, vol. 6, no. 2, pp. 95–108, 2011.
  76. A. Gegenfurtner, K. Veermans, and M. Vauras, “Effects of computer support, collaboration, and time lag on performance self-efficacy and transfer of training: a longitudinal meta-analysis,” Educational Research Review, vol. 8, pp. 75–89, 2012.
  77. M. Romero and E. Barberà, “Quality of e-learners' time and learning performance beyond quantitative time-on-task,” International Review of Research in Open and Distance Learning, vol. 12, no. 5, pp. 125–137, 2011.