Abstract

Objectives. To design a series of e-learning tools within the framework of a defined educational pedagogy to complement the conventional pharmacology curriculum at Griffith University, and to evaluate the impact of this strategy on student level of understanding through taxonomic classification of student final exam answers. Methods. A series of 148 e-learning tools was designed for third year undergraduate pharmacy students and incorporated into their curriculum during 2012. The educational benefits of the e-learning tools were evaluated by comparing student level of understanding (scored by SOLO taxonomy) in the final exams between the control group (standard curricula) in 2011 and the intervention group (standard curricula + e-learning tools) in 2012. Results. Backward linear regression analysis demonstrated that GPA was the most significant predictor of level of understanding, while membership of the intervention group was a highly significant predictor of a greater level of understanding in semester two. Conclusion. E-learning tools appeared to significantly improve student level of understanding, as scored by the SOLO taxonomy, when students engaged highly with the tools.

1. Introduction

The scholarship of learning and teaching (SoLT) involves research into practices of teaching, learning, and curriculum. SoLT’s main principle is that effective teachers in higher education should engage in scholarly teaching practices as a matter of course, by staying in touch with the latest research developments in their discipline, integrating these developments into their curriculum, and routinely gathering and using student feedback to guide curriculum review and improvement. SoLT research focuses on understanding student learning in order to improve the teaching and learning experience for participants [1–3]. One area in which SoLT principles are particularly important is pharmacology education, because it entails rich content involving many drugs and drug mechanisms of action, numerous detailed facts about drug classes and individual compounds, and even the diseases for which the various drugs are used [4]. Moreover, students perceive pharmacology as a more “difficult” learning area than other subjects in the undergraduate curriculum [5]. Consequently, teaching pharmacology curricula to students has been a challenge [6, 7] and up-to-date teaching methods, such as e-learning tools, have been proposed to keep the students engaged in the content [4]. E-learning tools have been shown to assist academics and educators to meet the growing needs and expectations for improving the quality of pharmacology education [8–10].

Yet, while e-learning tools offer a number of inherent features such as flexibility in place and time for learning, adaptability to diverse learning styles and paces of the students, and scalability to rising student numbers, their use remains limited [11]. This may be due to miscommunication between e-learning tool developers and the educators who make decisions about their use, economic factors such as high costs and time requirements for the development of e-learning tool content, the paucity of knowledge regarding how to effectively integrate e-learning tools into higher education curricula, and, perhaps most importantly, a lack of consensus in the scholarly literature on e-learning tool effectiveness [12–14].

This debate was ignited by Richard Clark’s 1983 article, in which he concluded that technology contributes nothing to learning beyond delivering the instruction, and that the instructional design (how the technology is used) is responsible for any achievement gains [15–17]. He went further, stating that as long as instructional methods promote appropriate cognitive processing during learning, the media do not seem to matter. However, technology has developed exponentially since Clark’s findings, and more than a decade later Robert Kozma contradicted Clark by stating that media and instructional methods have an integral relationship and that technology is most beneficial for learning when students are engaged within the curriculum [15, 16, 18]. More recently, research has shown that active learning can be supported by the employment of e-learning tools; when delivered appropriately, e-tools can scaffold student learning, supporting students’ construction of their own ideas and understanding of the content being presented [19]. E-learning can be administered as a stand-alone tool or can be adapted to supplement traditional teaching methods, as in blended learning [20], which has been shown to enhance student preference and satisfaction [21]. Thus, research evaluating the benefit of modern e-learning tools on student performance (in terms of the level of understanding achieved during assessment) is important to support the use of technology and blended learning for university students.

Assessment of student level of understanding is possible using taxonomic classification of exam answers; however, to the best of our knowledge, no study has employed this approach to evaluate the effect of e-learning tools within the higher education sector thus far. It would therefore appear that no higher education study has followed a holistic approach to evaluating the impact of technology on student learning, which might have contributed to the overly cautious implementation of e-learning tools in higher education settings [12]. Hence, there is a need for further scholarly research to overcome these challenges and maximise the potential of e-learning tools.

Our team has aimed to address some of these issues by designing a series of e-learning tools focusing on drug mechanisms of action, to complement the delivery of pharmacology content within the framework of a defined pedagogy, and to evaluate the impact of our e-learning tools on student level of understanding through taxonomic classification of student final exam answers. We chose to analyse student level of understanding in a summative assessment that contributed significantly to the course grade. The Human Pharmacology I and II final exams contained multiple choice questions (MCQs), short answer questions (SAQs), and long answer questions (LAQs). MCQs were not assessed in the study as they predominantly covered factual information, including drug doses, classifications of drugs, and definitions, content not related to drug mechanisms of action (the focus of the e-learning tools). Only SAQs and LAQs were therefore assessed; however, because the examination questions were not identical across the two years, Bloom’s revised taxonomy was used to classify the SAQs and LAQs according to the appropriate knowledge and cognitive dimensions and to ensure that the assessments were of comparable standard. Bloom’s revised taxonomy has been used previously by educators to ensure that nonidentical activities and assessments nevertheless align with similar levels of thinking skills, that is, “remembering” versus “evaluating” [22, 23]. This allows dissimilar exam questions classified at the same level of Bloom’s taxonomy to be compared across years.

Further, we aimed to assess student level of understanding in each question, instead of quantifying their performance by exam scores. The structure of the observed learning outcome (SOLO) taxonomy was therefore chosen as an evaluation rubric to qualitatively analyse student level of understanding for the short and long answer exam questions. SOLO provides a consistent framework through which to evaluate student responses and has been widely used in educational research as a means of determining the complexity and depth of student learning outcomes [24]. SOLO is a hierarchical model that is suitable for measuring learning outcomes across different kinds of subjects, different levels of students, and all lengths of questions [24]. Several researchers who have applied SOLO in their studies value both the comprehensiveness and the objectivity of the criteria it provides for measuring students’ cognitive attainment and the degree of deep learning that has occurred throughout a course [25–27].

This study also intended to evaluate the educational benefits of in-house designed e-learning tools that were embedded as supplements to the standard pharmacology curricula. The e-learning tools were implemented during semesters one and two of 2012, and student performance in terms of level of understanding (scored using SOLO taxonomy) was compared to that of the previous academic year (2011), in which students received their pharmacology content solely through the standard curriculum. The overarching goal of this research is to apply SoLT principles to improve the pharmacology courses we teach, and the study has the potential to achieve this by providing a framework for standardising the evaluation of student performance, determining student level of understanding, and improving students’ learning experiences.

2. Methodology

This study was conducted at the School of Pharmacy, Griffith University, Gold Coast campus, Australia. A suite of 83 e-learning tools (first set) was designed for the third year Human Pharmacology I course in semester one, 2012, and 65 e-learning tools (second set) for the Human Pharmacology II course in semester two, 2012. Both are 13-week courses normally delivered through three hours of didactic teaching per week plus weekly tutorials and laboratories totalling 2 to 4 hours. The same teaching team taught both the 2011 and 2012 cohorts. The e-learning tools covered the mechanisms of action for the majority of drug classes in the third year pharmacology curriculum and supplemented the usual delivery of this content.

To evaluate the educational benefits of the e-learning tools, our team conducted a comparative study involving two academic cohorts and two phases. The two cohorts were as follows: third year pharmacy students who studied the standard Human Pharmacology I and II curricula in 2011 (control group) and those who studied the standard curricula supplemented with the e-learning tools in 2012 (intervention group). The first phase of the study was to invite students from both groups to participate in a survey, while the second phase was to evaluate and compare student level of understanding (based on SOLO taxonomy) in the final exams between the two groups. Ethical approval was granted by the Griffith University Human Ethics Committee (protocol PHM/05/10/HREC).

2.1. Survey Design and Pilot Testing

To evaluate baseline student attributes in semester one, a paper-based survey was designed to obtain demographic data, including gender and grade point average (GPA), together with frequency of attending lectures and perceived difficulty of understanding topics covering drug mechanisms of action.

2.2. E-Learning Tool Design and Implementation

Custom animations were sequenced in Microsoft PowerPoint 2010 and iSpring Pro 6.1.0 (iSpring Solutions, Inc., USA) was used to add narration, produce the embedded animation, and convert the animations into a Flash format (.swf file) for ease of delivery through Blackboard. The e-learning tools were designed to explain concepts related specifically to drug mechanisms of action. Participants could easily control the speed of the final e-learning tools, skip content, and move forward and backward as needed to revisit specific concepts. The first and second sets of e-learning tools were made available to students who enrolled in the Human Pharmacology I and II courses in 2012 via the course websites in Griffith University’s Blackboard interface. Students were informed about the e-tools in the first lecture of the course, and the e-learning tools were available to students the following week. Thus, students had access to e-learning tools well before the first major assessment item (mid-semester exam).

These e-learning tools were designed and developed incorporating established educational theories. For example, cognitive load theory and Mayer’s dual channel assumption [28] state that students learn better from a combination of words and pictures presented simultaneously when extraneous words, pictures, and sounds are excluded. Students also learn better when multiple sources of information are integrated, when animation and narration are combined, and when students can interact with learning materials. These principles were incorporated into the design of our e-learning tools. A major advantage of custom-designed e-learning tools is that the content and delivery are structured and moulded to the specific requirements of our pharmacology curriculum and our learning and teaching needs. Another advantage is that educators can easily and economically update the content to encompass evolving course learning objectives, changed practices, and new developments in drug discovery and applications. This overcomes a serious limitation of commercially available tools, which, although developed by trained programmers using complex software packages, are often too generic or too prescriptive for our specific learning and teaching contexts. Further detail on e-tool design and implementation can be found in a prior publication [29].

2.3. Student Recruitment
2.3.1. The Control Group (2011)

The course convenor approached students who enrolled in the Human Pharmacology I course (semester one, 2011) to explain the study aims and objectives. The students were then invited to participate in the first phase of the study and undertake the survey. Students who expressed interest in continuing to the second phase of the study were instructed to tick a designated box that appeared on their exam paper. This box indicated their consent for the research team to evaluate their exam answer booklets for both Human Pharmacology I and II courses in 2011. The exam booklets were deidentified and coded to keep student participation anonymous. In each phase, students were advised that their participation was completely voluntary and would not affect their academic standing or course grades.

2.3.2. The Intervention Group (2012)

The course convenor approached students who enrolled in the Human Pharmacology I course (semester one, 2012) in the introductory lecture to explain the study aims and objectives. The students were also informed about the e-learning tools and how to access them through Blackboard.

The students were then invited to participate in the first phase of the study and undertake the survey to obtain their demographic data. As in the control group, students who expressed interest in continuing to the second phase of the study were instructed to tick the designated box that appeared on their exam paper, which indicated their consent for the research team to evaluate their exam answer booklets for both Human Pharmacology I and II courses in 2012. The exam booklets were deidentified and coded to keep student participation anonymous. As in the control group, students were reminded in each phase that their participation was completely voluntary and would not affect their academic standing. The students received multiple reminders about the availability of the e-learning tools through emails and announcements in semester two to improve engagement with the tools.

2.4. Demographic Data

Demographic data were obtained from participants through two sources. Students who participated in phase one of the study self-reported their demographic information via the survey. Demographic data were also consensually obtained from university records for the students who chose to participate in the second phase of the study.

2.5. Exam Questions Classification and Scoring Procedure

To evaluate the educational benefit of the e-learning tools, student level of understanding in the final exams was evaluated using the SOLO taxonomy and compared between the control and intervention groups. As the e-learning tools were designed to explain drug mechanisms of action, we evaluated only the questions that concerned drug mechanisms of action. A reference question, which covered drug mechanisms of action but for which no e-learning tool was designed, was also evaluated as a negative control. To compare the short and long answer questions between the two groups in the semester one and semester two final exams, our team used Bloom’s revised taxonomy to classify the questions according to the appropriate knowledge and cognitive dimensions [30]. We then grouped the questions that examined the same levels of the knowledge and cognitive dimensions to ensure valid comparisons between different exam questions. Bloom’s revised taxonomy classifies questions according to what they examine: the knowledge dimension (four levels: factual, conceptual, procedural, and metacognitive knowledge) and the cognitive dimension (six levels: remember, understand, apply, analyse, evaluate, and create) [30]. However, the highest levels of the taxonomy, namely, metacognitive knowledge, evaluate, and create, are not usually examined at the undergraduate level [31].
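To illustrate this grouping step, the following minimal Python sketch (our own illustration; the question labels and dimension tags are hypothetical, not the actual exam items) shows how questions tagged with Bloom’s revised taxonomy dimensions could be checked for comparability across years.

```python
# Hypothetical illustration of grouping exam questions by Bloom's revised
# taxonomy dimensions; labels and tags below are invented for the example.

KNOWLEDGE = ("factual", "conceptual", "procedural", "metacognitive")
COGNITIVE = ("remember", "understand", "apply", "analyse", "evaluate", "create")

# Each question is tagged with the dimensions it examines.
questions = {
    "2011_Q1": {"knowledge": {"factual", "procedural"},
                "cognitive": {"remember", "understand"}},
    "2012_Q1": {"knowledge": {"factual", "procedural"},
                "cognitive": {"remember", "understand"}},
}

def comparable(q1: str, q2: str) -> bool:
    """Questions are comparable if they examine the same Bloom dimensions."""
    a, b = questions[q1], questions[q2]
    return a["knowledge"] == b["knowledge"] and a["cognitive"] == b["cognitive"]

print(comparable("2011_Q1", "2012_Q1"))  # True: valid cross-year comparison
```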

To evaluate student level of understanding in the short and long answer questions, SOLO taxonomy was used to classify each student’s exam responses. This taxonomy consists of five levels of increasing structural complexity: prestructural (students report unorganized and unstructured pieces of information), unistructural (students can use terminology, recite information, and identify names), multistructural (students are able to describe, classify, combine, and apply methods), relational (students understand relations between several aspects and how they might fit together to form a whole), and extended abstract (students may generalize structure beyond what was given, may perceive structure from many different perspectives, and transfer ideas to new areas) [32]. SOLO taxonomy has been used successfully by other researchers to measure cognitive learning outcomes and qualitatively evaluate student performance in different courses among different levels of students [25, 3234]. Description of the scoring system is available in Table 1.
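As an illustration of how such a rubric can be operationalised, the sketch below assigns a numeric coding to the five SOLO levels and averages the scores for a set of marked answers; the 1 to 5 coding is an assumption for illustration, as the study’s actual scoring system is described in Table 1.

```python
from enum import IntEnum

class SOLO(IntEnum):
    # Assumed 1-5 coding for illustration; see Table 1 for the study's scoring.
    PRESTRUCTURAL = 1      # unorganized, unstructured pieces of information
    UNISTRUCTURAL = 2      # uses terminology, recites, identifies names
    MULTISTRUCTURAL = 3    # describes, classifies, combines, applies methods
    RELATIONAL = 4         # relates aspects and integrates them into a whole
    EXTENDED_ABSTRACT = 5  # generalizes beyond the given, transfers ideas

# Example: mean SOLO score for one exam question across a marked cohort.
answers = [SOLO.UNISTRUCTURAL, SOLO.MULTISTRUCTURAL, SOLO.RELATIONAL]
print(f"Mean SOLO score: {sum(answers) / len(answers):.2f}")  # 3.00
```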

This process was pilot tested by our team [35], and a validation process was followed to ensure consistency in evaluating student responses. Student answers were checked against the SOLO taxonomy criteria by the main investigator and two senior pharmacology lecturers with postgraduate educational qualifications. A meeting was set up to reach consensus on student answers that were assigned inconsistent SOLO levels by the markers.
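The flagging step that preceded the consensus meeting can be sketched as follows; the marker data here are hypothetical, and the study’s actual process involved manual checking by the three markers.

```python
# Hypothetical SOLO levels assigned by three markers to each student answer.
ratings = {
    "student_01_Q1": [2, 2, 2],   # consistent across markers
    "student_02_Q1": [3, 4, 3],   # inconsistent: flag for the consensus meeting
    "student_03_Q2": [1, 1, 2],   # inconsistent: flag for the consensus meeting
}

# An answer needs discussion whenever the markers did not all agree.
flagged = [answer for answer, levels in ratings.items() if len(set(levels)) > 1]
print("Answers requiring consensus discussion:", flagged)
```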

2.6. Data Analysis

To evaluate the survey results, a number of quantitative analyses were undertaken. Demographic data including gender, GPA, and English as first language were compared between the students from the control and intervention groups using t-tests and chi-squared tests. Student level of understanding in short and long answer questions was scored according to SOLO taxonomy, and SOLO scores were compared between the two groups using t-tests. Backward linear regression analysis was performed to model student level of understanding, using the demographic variables (age, gender, GPA, and domestic/international status) and control/intervention group. The effect of e-learning tool usage on student level of understanding for the intervention group was assessed by correlation analysis. Power analysis using Russ Lenth’s power applet showed that we had at least 80% power to detect a one standard deviation difference in the means for all t-test analyses. However, we had only 76% power to detect a difference of 15% in proportions between groups for the chi-squared analyses of student preference [36]. All statistical analyses were performed using IBM SPSS software (v 20). Probability (P) values of less than 0.05 were considered statistically significant.
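For readers wishing to reproduce this style of analysis outside SPSS, the sketch below shows how the t-tests, chi-squared tests, and backward elimination regression could be run in Python with SciPy and statsmodels. The data are simulated and the column names are our assumptions; this is an illustration of the methods, not the authors’ analysis code.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Simulated stand-in for the study data: one row per consenting student.
rng = np.random.default_rng(42)
n = 75
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),    # 0 = control (2011), 1 = intervention (2012)
    "gpa": rng.uniform(3.5, 7.0, n),   # assuming a 7-point GPA scale
    "age": rng.integers(19, 30, n),
    "gender": rng.integers(0, 2, n),
    "domestic": rng.integers(0, 2, n),
})
# Simulated SOLO outcome driven mainly by GPA, plus a small group effect.
df["solo"] = 0.5 + 0.4 * df["gpa"] + 0.35 * df["group"] + rng.normal(0, 0.6, n)

# t-test: mean SOLO score, control versus intervention.
t_stat, p_val = stats.ttest_ind(df.loc[df.group == 0, "solo"],
                                df.loc[df.group == 1, "solo"])
print(f"t = {t_stat:.2f}, P = {p_val:.3f}")

# Chi-squared test for a categorical comparison (e.g., gender by group).
chi2, p_chi, dof, _ = stats.chi2_contingency(pd.crosstab(df["gender"], df["group"]))
print(f"chi2 = {chi2:.2f}, P = {p_chi:.3f}")

# Backward elimination: fit the full model, then repeatedly drop the least
# significant predictor until all remaining predictors satisfy P < 0.05.
predictors = ["group", "gpa", "age", "gender", "domestic"]
while predictors:
    X = sm.add_constant(df[predictors].astype(float))
    model = sm.OLS(df["solo"], X).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    predictors.remove(pvals.idxmax())
print("Retained predictors:", predictors)  # typically ['group', 'gpa'] here
```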

3. Results

A total of 118 students were enrolled in the Human Pharmacology I course in 2011 compared to 82 in 2012. Fifty-five (47%) students participated in the survey from the 2011 cohort compared to 43 (53%) from the 2012 cohort. There was no significant difference between the two groups in the demographic data (Table 2). Students were also asked to indicate their studying habits for the Human Pharmacology courses (Table 2). No significant difference was seen in the number of students who read through the lecture notes before attending lectures. Responses on the level of difficulty in understanding course content were divided between easy, neutral, and difficult, with no significant difference between the groups. Finally, participants were asked to indicate their attendance behaviour at Human Pharmacology lectures. There was no significant difference between the groups; only a small percentage (12%) of students rarely attended lectures, with the majority (88%) either frequently or always attending.

A total of 78 students consented to participate in the second phase of the study, with 53 (45%) students from the control cohort (2011) and 25 (31%) from the intervention cohort (2012). Study participant numbers remained relatively stable across semesters; only one student from the control group and two students from the intervention group failed the Human Pharmacology I course and were not able to proceed to the Human Pharmacology II course in second semester, reducing the study numbers to 52 in the control cohort (2011) and 23 in the intervention cohort (2012). The demographic data of those participants were obtained from the university records to ensure accuracy (Table 3). Statistical analysis of the demographic data showed no significant difference between the two groups in any of the comparisons. However, the difference in the gender variable approached significance, as more females participated in the control group.

Student level of understanding in the semester one exam (Human Pharmacology I) was scored according to SOLO taxonomy and compared between the intervention and control groups. Table 4 shows SOLO scoring for both overall performance and individual questions classified by Bloom’s revised taxonomy. Students from the intervention group significantly outperformed their peers from the control group in question one, which examined the factual and procedural knowledge dimensions in addition to remembering and understanding from the cognitive dimension. One question was repeated in both years’ exams (digoxin), and students from the intervention group outperformed the students from the control group; however, the difference was not significant. Additionally, there was no significant difference between the control and intervention groups when answering the reference exam question (for which no e-learning tool was designed).

Table 5 shows SOLO scoring of student level of understanding for the semester two exam (Human Pharmacology II), again for both overall performance and individual questions classified by Bloom’s revised taxonomy. Students from the intervention group performed better than the control group when comparing the overall level of understanding; however, the difference was not significant. Moreover, four questions on specific drugs (Cytarabine, Mitomycin C, Trastuzumab, and Nitroimidazole) were repeated in both years’ exams, and participants from the intervention cohort outperformed the control group in all four questions; however, only two of the four (Cytarabine and Trastuzumab) showed a significant difference between the control and intervention groups, with an increase in SOLO scoring for the intervention group (by 0.8 and 0.6 units, respectively). The other two questions (Mitomycin C and Nitroimidazole) did not show a significant difference between groups; however, a nonsignificant increase in SOLO scoring (of 0.4 units each) was observed for these two questions.

Finally, students from the control group performed significantly better in the reference exam question for which there was no e-learning tool (question 8), which examined the factual and procedural knowledge dimensions in addition to understanding and analysing from the cognitive dimension. When comparing overall performance in the two groups across semesters, a decrease in SOLO scoring from semester one to semester two was observed for both groups (Table 6). However, while the decrease in performance for the control group was significant (a decrease of 0.5 units), the decrease in performance for the intervention group was not (a decrease of 0.3 units), showing that the intervention group maintained their level of performance (as scored by SOLO) even when faced with more complex material.

To model the level of student understanding while controlling for possible confounding variables, we performed backward linear regression analysis separately for each semester. For semester one, four models were generated, with the most significant model containing the variables intervention group (control versus intervention) and GPA, and the variables domestic/international status, age, and gender removed from the model. This model explained approximately 38.4% of the variance in semester one level of understanding, with GPA as the most significant predictor of level of understanding; however, the intervention group was not a significant predictor of the level of understanding. Student status (domestic or international), age, and gender were not significant predictors. For semester two, again, the model containing the variables intervention group and GPA was the most significant of the four models generated. This model explained approximately 31.1% of the variance in semester two level of understanding. Again, GPA was the most significant predictor of level of understanding; however, the intervention group was also a highly significant predictor for semester two. Students who used the e-learning tools had an increase of about 0.35 in their total level of understanding SOLO score. This may be because student uptake of the e-learning tools was significantly higher in semester two than in semester one (555 versus 1054 overall hits in semesters one and two, respectively), showing the significant effect of the e-learning tools on student performance in semester two. This was further supported by a correlation analysis of student performance (as scored by SOLO level of understanding) in each question against the usage level of the corresponding e-learning tool for that question (Figures 1 and 2). In both semesters, a strong positive correlation was observed, showing that tools that were used more frequently were associated with higher performance on the corresponding exam question.
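The per-question correlation analysis can be illustrated with the short Python sketch below; the hit counts for the first two tools echo those reported in the Discussion (40 and 25 for the Cytarabine and Trastuzumab tools), while the remaining hit counts and all mean SOLO scores are hypothetical values for illustration.

```python
from scipy.stats import pearsonr

# Hits on each e-learning tool and the mean SOLO score achieved by the
# intervention cohort on the matching exam question (illustrative values;
# the 40 and 25 hit counts are reported in the Discussion).
hits = [40, 25, 18, 15, 12]
mean_solo = [3.1, 2.9, 2.4, 2.3, 2.1]

r, p = pearsonr(hits, mean_solo)
print(f"Pearson r = {r:.2f}, P = {p:.3f}")  # strong positive correlation
```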

4. Discussion

Our team was able to successfully develop and embed 148 e-learning tools designed to meet our pharmacology curriculum’s learning objectives and underpinned by relevant teaching theories, using commercially available software packages such as iSpring Pro and PowerPoint. The advantage of these e-learning tools designed and developed in-house is that, in addition to the explicit alignment in their content and context with our curriculum, educators can easily update content to match evolving course learning objectives or changed practices, unlike commercially available tools developed by trained programmers using complex software packages.

To analyse the benefit of the e-learning tools, we used Bloom’s revised taxonomy to classify the questions according to the knowledge and cognitive dimensions they examine and then scored student level of understanding using SOLO taxonomy when students attempted the questions. To examine overall results, we averaged the total student performance (as measured by SOLO score) on all questions except the reference question and found that the students from the intervention group outperformed their peers, though the difference was not significant. Similar results were found by a study of secondary school children that evaluated the impact of e-learning tools on student level of understanding using SOLO taxonomy [33]. Another study of secondary school students found that e-learning tools helped students progress to a higher level of understanding when compared with the traditional teaching method [37]. It is important to note that our study is the first to use both Bloom’s revised and SOLO taxonomies to analyse student level of understanding and evaluate the effect of e-learning tools within the medical sciences in higher education. This has been a successful approach in our study and should be implemented in future studies to assist researchers in ensuring that questions requiring comparable thought processes (remembering versus evaluating, for example) are compared across cohorts. This will also assist with comparing results across different studies.

Regarding the effect of the e-learning tools on student performance, although the overall averaged performance did not show a significant effect, individual examination of questions that were common to both years and that had the same classification under Bloom’s taxonomy showed that the e-learning tools had some improving effect on student understanding, although this improvement was significant for only two of the questions (Cytarabine and Trastuzumab; Table 5). For the remaining short answer questions, a nonsignificant increase in SOLO scores was observed; nevertheless, it is possible that this reflects a smaller effect size of the e-tools and, correspondingly, less power to detect a significant effect for those questions.

This is supported by an examination of student engagement in Figures 1 and 2, which showed a positive correlation between the number of hits on the e-tools and SOLO scoring, with higher engagement associated with higher SOLO scoring on average. Furthermore, the two questions that showed a significant increase in SOLO scoring in the intervention group had higher student engagement with the e-learning tools in terms of number of hits (40 and 25 hits for Cytarabine and Trastuzumab, respectively), while the nonsignificant questions had lower SOLO scoring and lower engagement (fewer than 20 hits each). It is likely that the lower levels of engagement with these tools resulted in a smaller tool effect than for tools with higher engagement levels; thus, with an identical sample size, the statistical power may not have been high enough to detect a significant result, even if a true effect of the tools exists. For the two questions with high engagement, the tool effect was larger (a greater improvement in SOLO scores), explaining why a significant result was detected for these questions at the same sample size. Ideally, if the study could be repeated with equivalent levels of student engagement with all tools, a more precise estimate of the e-learning tool effect could be obtained. However, this may be difficult to implement in practice, as better measures of student engagement with the e-learning tools would be required, and it would be difficult to ensure that students engaged with all e-learning tools equally. This hypothesis could also be investigated with a larger study, which would have more power to detect a smaller effect size.
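The interplay between effect size and power at a fixed sample size can be illustrated with statsmodels; the group sizes below follow the semester two study numbers (52 control, 23 intervention), while the effect sizes are arbitrary values chosen to show how power drops as the effect shrinks.

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sample t-test at the study's semester two group sizes,
# across a range of standardised effect sizes (Cohen's d).
analysis = TTestIndPower()
for d in (1.0, 0.8, 0.5, 0.3):
    power = analysis.power(effect_size=d, nobs1=23, ratio=52 / 23, alpha=0.05)
    print(f"d = {d:.1f}: power = {power:.2f}")
# Smaller effects (e.g., from low-engagement tools) are much harder to detect.
```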

Furthermore, it is interesting to note that students from the control group outperformed their colleagues in Q8, the reference question for which no e-learning tool covered the concept, and that the difference was so great (0.8 units) that it was highly significant (Table 5). This shows that, in the absence of the e-learning tools, the control group scored better than the intervention group, suggesting that the control group were not poorer performers than the intervention group in general. The survey results (Table 2) also confirm that there were no additional significant differences between the control and intervention groups in terms of either their demographics (including GPA) or their study behaviours (lecture attendance, prelecture preparation, etc.).

Demographic variables such as gender, GPA, age, and background have been shown to influence exam performance and level of understanding. Previous research has suggested that males usually have a positive experience with technology, whereas females tend to prefer person-to-person learning over learning from computers [38, 39], and other research has suggested that age can affect student performance and interaction with technology [40]. Thus, we analysed the demographic variables of our participants to ensure that the characteristics of students were similar across groups and found no significant differences in the distribution of the key demographic variables between groups. While the performance of the students in other courses was not recorded, GPA is positively correlated with performance in other courses, so it captures the variation in performance due to overall academic standing. This supports the conclusion that the groups differed predominantly in the e-learning tool intervention, which contributed to the observed outperformance of the intervention group over the control group on the e-learning tool questions. Additionally, we modelled the level of student understanding by performing backward linear regression analysis for each semester while controlling for these possible confounding variables. This analysis demonstrated that age, gender, and background did not have a significant effect on the level of understanding. The analysis confirmed that GPA did significantly affect student level of understanding, conforming to the widely accepted conclusion that GPA is a strong indicator of academic performance [40, 41]; however, GPA alone did not account for the improvement in students’ level of understanding, and the e-learning tools were also found to be a significant predictor of student level of understanding. Thus, the benefit of the e-learning tools remained even when student GPA was taken into account.

However, the significant impact of the e-learning tools was only observed when student engagement with the e-tools was high. In semester one, the total performance was not significantly different between the groups; however, analysis of student engagement with the e-learning tools revealed low levels of usage and engagement. This was because students either forgot or did not have time to access the tools, as reported in a previous study of student engagement in this cohort [42]. This was addressed in semester two by regularly reminding the students about the e-tools throughout the semester, which led to significantly higher engagement with the e-learning tools in semester two [42]. The increase in student engagement with the e-learning tools was reflected in their level of understanding; students from the intervention group outperformed their peers from the control group in every short answer question reinforced by e-learning tools. This was also confirmed by further analysis, which showed a strong positive correlation between e-learning tool usage and student level of understanding in the exam. A previous study reported a similar conclusion, showing a strong relationship between study material usage and exam performance [43].

The e-learning tools also appeared to mitigate the decrease in student performance as students moved from a less complex course (Human Pharmacology I, semester one) to a more complex course (Human Pharmacology II, semester two). It is commonly acknowledged that academics should not challenge students with difficult concepts at the start of their courses; instead, the focus should be on introducing students to the environment of the course and then including the difficult content in the later stages [44]. Therefore, the Human Pharmacology curriculum was structured to start with simple modules in Human Pharmacology I, in order to build student knowledge, and then proceed to more complicated modules involving processes and mechanisms in Human Pharmacology II. A typical example of a more complex module is the mechanisms of cancer drugs in the semester two Human Pharmacology II course, where students usually struggle to digest the mechanisms of action. Thus, we expected to observe an overall drop in the level of student performance (as measured by SOLO scores) from semester one to semester two, given that semester two was a more complex course. Although a decrease in SOLO scores was generally observed for both groups, only the decrease in performance among the control cohort was statistically significant. Further, students from the control group achieved only a unistructural level of understanding (lower SOLO scoring) when answering questions related to cancer drugs in semester two, while students from the intervention group scored a higher level of understanding (higher SOLO scoring). This further supports the benefit of e-learning tools on student level of understanding when students move from introductory courses to more complex courses in the same field.

However, one limitation of the study was that the availability of e-learning tools on specific topics may have acted as a “signpost” directing students to focus their efforts on those topics, and the frequent email reminders in semester two to use the e-tools may also have served as a generalised study reminder; either could have contributed to the higher scoring in the intervention group. While it is possible that such a signposting effect drew the students’ attention to these topics, producing a significant effect only in exam questions covered by e-tools and allowing the control group to significantly outperform the intervention group on the reference question, it is unlikely that this was the sole reason for the significant differences between the groups. This is because differences in performance were observed across e-tool topics, with higher performance observed in questions whose e-tools were highly used (Figures 1 and 2). If either a signposting effect existed or the reminders simply increased overall levels of study, an increase in performance scores would be expected equally across all tools, as well as for the reference question, rather than an improvement concentrated in the e-learning tools showing higher engagement. Instead, a positive correlation between usage and improvement in SOLO scoring was observed, suggesting that it was usage of the e-tools, and not signposting or generalised study reminders, that resulted in the increase in performance. Furthermore, the survey results from phase one showed that study behaviours did not differ significantly between the two groups (Table 2). Nevertheless, controlling for levels of student interaction with the tools (if a more detailed record of engagement levels can be developed), collecting additional information on study behaviours, and sending generalised study reminders to the control cohort as well as the intervention cohort would improve the design of future studies.

5. Conclusion

This study evaluated the effects of a set of in-house designed e-learning tools, embedded as supplements to the standard pharmacology curricula in semesters one and two, and found a number of significant benefits for student learning. E-learning tools appeared to significantly improve student level of understanding, as scored by the SOLO taxonomy, when there was substantial student engagement with the e-tools. We also found that the e-learning tools appeared to mitigate the decrease in student performance observed when students progress into more complex courses. The study also demonstrated that a holistic approach underpinned by educational pedagogy can be employed to objectively evaluate the impact of technology on student learning: different student cohorts can be compared effectively by using Bloom’s revised taxonomy to classify exam questions into common learning dimensions and by using SOLO taxonomy scoring, rather than exam grades alone, to evaluate student level of understanding. Our approach and findings contribute to the scholarship of learning and teaching (SoLT) in relation to e-learning tools and may potentially enhance both pharmacology and other courses by providing a framework for standardising the evaluation of the impact of online learning strategies on student performance and learning experiences.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the third year pharmacy students of the 2011 cohort for undertaking the study survey and of the 2012 cohort for using the e-learning tools and participating in the study survey. The authors would also like to thank the faculty of Griffith Health at Griffith University for providing the blended learning grant that funded this work.