Nursing Research and Practice
Volume 2018, Article ID 7437386, 10 pages
https://doi.org/10.1155/2018/7437386
Research Article

The Impact of a New Pedagogical Intervention on Nursing Students’ Knowledge Acquisition in Simulation-Based Learning: A Quasi-Experimental Study

1Department of Nursing and Health Sciences, University of South-Eastern Norway, Post Box 235, 3603 Kongsberg, Norway
2Department of Nursing Science, University of Oslo, Post Box 1130, Blindern, 0318 Oslo, Norway

Correspondence should be addressed to Thor Arne Haukedal; thor.a.haukedal@usn.no

Received 18 April 2018; Revised 9 July 2018; Accepted 16 August 2018; Published 1 October 2018

Academic Editor: Kathleen Finlayson

Copyright © 2018 Thor Arne Haukedal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Simulation-based learning is an effective technique for teaching nursing students skills and knowledge related to patient deterioration. This study examined students’ acquisition of theoretical knowledge about symptoms, pathophysiology, and nursing actions after implementing an educational intervention during simulation-based learning. A quasi-experimental study compared theoretical knowledge between two groups of students before and after implementation of the intervention. The intervention introduced the following new components to the existing technique: a knowledge test prior to the simulation, video-recording of the performance, and introduction of a structured observation form used by students and facilitator during observation and debriefing. The intervention group had significantly higher scores on a knowledge test conducted after the simulations than the control group. In both groups scores were highest on knowledge of symptoms and lowest on knowledge of pathophysiology; the intervention group had significantly higher scores than the control group on both topics. Students’ theoretical knowledge of patient deterioration may be enhanced by improving the students’ prerequisites for learning and by strengthening debriefing after simulation.

1. Introduction

Simulation-based learning (SBL) is a technique [1] widely used in nursing education, and its use as an educational tool to achieve a wide range of learning outcomes has been supported by a multitude of studies [2]. One important outcome of nursing education is improved recognition and management of patient deterioration; these are essential nursing skills that students should begin to develop while in school [3], and students need a wide range of knowledge to recognize and act upon the signs of deterioration.

The relationship between theory and practice is a complex challenge in professional education. This is widely documented and commonly termed as a “gap” [4]. To reduce this gap, theoretical knowledge and practical experience must be integrated. SBL is a pedagogical approach that can be considered a “third learning space” between coursework and practicums; this approach may bring the content and process of theoretical work and practical training closer to each other [5].

SBL has been said to be a more effective teaching strategy than classroom teaching for the development of assessment skills for the care of patients with deteriorating conditions [6]. Simulation provides an opportunity to be exposed to critical scenarios and can highlight the clinical signs and symptoms the students will have to deal with in these situations [7]. It gives inexperienced students the opportunity to use their knowledge in a simulated situation that mirrors the clinical context without the risk of harming actual patients [8]. Situated learning theory claims that learning is influenced by the context in which it occurs [9]. Tun, Alinier, Tang, and Kneebone [10] argue that fidelity may hinge on the learners’ perceived realism of the context and that a simulation may seem realistic to students who lack experience. Lavoie, Pepin, and Cossette [11] call for educational interventions that can enhance nursing students’ ability to recognize signs and symptoms in patient deterioration situations.

Many studies claim that SBL may improve theoretical knowledge acquisition [12–16]. A review by Foronda, Liu, and Bauman [17] suggested that simulation was an effective andragogical method for teaching skills and knowledge and called for more research to strengthen the evidence related to what types of nursing knowledge and nursing content could be effectively developed through SBL.

A review of empirical studies of educational interventions related to deteriorating patient situations showed that few used objective assessment as an outcome measure: only just over one-third measured improvement in knowledge, skills, and/or technical performance [18]. On the other hand, a plethora of studies are concerned with the students’ experiences during SBL; these studies have found that students generally show a high degree of satisfaction with SBL as an educational technique because they often experience increased knowledge and confidence [3, 19, 20]. At the same time, however, there is a broad tendency for nursing students to overestimate their skills and knowledge in self-reports [21]. An essential component of quality assurance in nursing education thus remains to evaluate students’ knowledge acquisition [22].

The present study compared acquisition of theoretical knowledge by two cohorts of nursing students in the course of six simulated scenarios on patient deterioration before and after the implementation of an educational intervention during simulation-based learning. The intervention aimed to improve students’ learning prerequisites and strengthen debriefing in simulation. The aim of the study was to explore whether there was a difference in students’ knowledge level before and after the educational intervention. The following research questions were developed:
(i) What are the differences in posttest scores on the knowledge test between the control group and the intervention group?
(ii) What are the differences between stimulus (pre-) and posttest scores in the intervention group?

2. Methods

2.1. Intervention

In this study, we present an intervention to enhance students’ theoretical knowledge via simulation-based learning and measure this development using an objective assessment. The intervention was inspired by the First2Act model, as described by Buykx and colleagues [3, 20]. First2Act was developed to improve nurses’ emergency management skills [20], and it comprises five components: developing core knowledge, assessment, simulation, reflective review, and performance feedback. These components are grounded in experiential learning theory and the empirical pedagogical literature (e.g., [23, 24]). A lack of theory-based research in simulation hampers the development of coherence and external validity in this field of research [25]. The present study uses First2Act as an explicit theoretical framework. Its distinct components are pedagogically founded and are hypothesized to enhance learning in advanced simulation [20]. Given the importance of feedback in simulation-based learning [26], the feedback processes were enhanced beyond those introduced by First2Act [20].

The simulations were conducted before the students’ first clinical practicum in hospital medical or surgical units. Simulation training before a practicum can, if the experiences reflect the way knowledge and actions will be used in actual practice, provide the students with authentic activities that mirror the forthcoming experiences in the real world of nursing [9]. In both cohorts, the students participated in a total of six scenarios in which the patient’s condition deteriorated: angina pectoris, cardiac arrest, hypoglycemia, postoperative bleeding, worsening of obstructive lung disease, and ileus. The scenarios were inspired by scenarios already created by the National League for Nursing and Laerdal, a medical company (Laerdal Medical Corporation, 2008), and refined in collaboration with practicing nurses to suit a Norwegian context. The scenarios were carried out over two days, meaning that the students were given repeated opportunities to collaborate on the assessment and treatment of deteriorating patients and to go through the cycles of reflection before, in, and on action [27]. The students were organized into previously established learning groups, each consisting of 5–9 students. Students in both cohorts had completed theoretical education on pathology, nursing subjects, and basic life support and had learned a variety of practical nursing skills in the simulation center.

Two students acted as nurses in each scenario, with one as the leading nurse; the remaining students were observers or next-of-kin. All students acted the role of leading nurse at least once during simulation and most twice. The students received a short synopsis of the six scenarios one week before the simulation to give them the opportunity to prepare for the simulations. During simulation, one of the faculty members had the role of facilitator, while another operated the manikin VitalSim (Laerdal Medical, Norway). Table 1 details the structure of the scenario simulation in both cohorts participating in the study.

Table 1: Structure of the scenario simulation for both cohorts.

Our intervention introduced the following new components to the existing procedure.

(1) A knowledge test with multiple-choice questions (MCQs) was conducted one week before simulation as a stimulus for learning, to give the students the opportunity to prepare in advance. The questions covered core knowledge associated with each of the scenarios; students received individual electronic feedback giving the correct answers.

(2) The simulation performance was video-recorded on an iPad (Table 1).

(3) While observers and facilitators in the 2013 cohort gave feedback in relation to general learning outcomes, a structured observation form was developed for the 2014 cohort that covered scenario-specific observations and actions (Table 1), for example, measuring blood pressure, correct medication administration, when to call for help, and priority of actions.

(4) The debriefing was divided into two sessions: First, the students who had performed the simulation watched the video-recording, allowing for an assessment of their own performance (Table 1). Meanwhile, the observers planned and discussed the feedback they would provide the performing students, with reference to the structured observation form, and the facilitator and operator did likewise. The observation form described correct nursing actions and observations related to scenario-specific learning outcomes. Second, a facilitator-led debriefing was conducted following the framework described in the Debriefing Assessment for Simulation in Healthcare (DASH)© [28]. The new observation form also guided the facilitators during debriefing.

2.2. Study Design

This study used a two-group quasi-experimental design [29] that compared students’ knowledge acquisition between a control and an intervention group. The control group experienced simulated scenarios according to their existing study program, while the intervention group experienced simulated scenarios based on a new pedagogical design, implemented one year later.

2.3. Sample and Setting

All students in the second year of their bachelor’s degree in nursing at a university college in Norway were invited to participate in the study in 2013 (99 students) and in 2014 (91 students). The students were informed about the study in class and on the institution’s digital learning platform. In December 2013, the 68 students who agreed to join the study participated in the scenario simulations as the control group; of these, 60 agreed to participate in the posttest. In December 2014, the 69 students who agreed to join took part in the scenario simulations as the intervention group; of these, 53 agreed to participate in the posttest. Of these 53 students, 40 participated in the stimulus test conducted before the simulation.

2.4. Development of the Instrument

The instrument had two sections. The first section collected demographic data: age, gender, whether the students had worked in the health service (and if so, for how long), and whether they had prior experience with simulation. The second section consisted of a multiple-choice questionnaire (MCQ) developed to function as both the stimulus test and the posttest (Table 2).

Table 2: Multiple-choice questionnaire.

The questionnaire consisted of 18 questions related to the deterioration of the patient’s condition, 3 from each of the 6 scenarios that students simulated. Six questions covered pathophysiology, for example, “Which situations can lead to hypovolemia?”; five questions covered symptoms, for example, “What are the two symptoms that can be present during an attack of angina pectoris?”; five questions covered nursing actions, for example, “How do you handle an unconscious diabetic patient?”; and two questions covered prioritization of nursing actions, for example, “Rank in prioritized order the actions for a patient with cardiac arrest.” Of the 18 questions, 14 had 4 answer options, of which students were to mark 2 correct answers; 2 had 3 answer options, of which students were to mark 1 correct answer; and in the last 2 questions students were required to rank 4 answer options. The MCQ was developed by the facilitators, and content validity was established by experienced practicing nurses. The instrument had moderate internal consistency, with Cronbach’s alpha of 0.62 for the control group, 0.73 for the intervention group, and 0.62 for the stimulus test. These somewhat low values may reflect the fact that the instrument covered multiple content areas with only 18 questions.
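For readers unfamiliar with the reliability statistic used here, Cronbach’s alpha is the ratio-based internal-consistency coefficient alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal sketch, using a small hypothetical matrix of dichotomous item scores (not the study data), might look like:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)        # per-item sample variance
    total_var = item_scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 item scores for 6 students on 4 items (illustration only)
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Values in the 0.6–0.7 range, as reported above, are usually read as acceptable but modest reliability, which is expected when few items span several content areas.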

2.5. Data Collection and Analysis

Both posttests were completed as paper-and-pencil tests immediately after the last scenario simulation. This format was chosen to achieve the highest possible response rate [30]. The stimulus test for the intervention group was completed electronically through a digital learning platform one week before the SBL started.

Data analysis was performed with SPSS, version 22. Homogeneity between the groups was tested with descriptive summary statistics; then, knowledge scores were analyzed with descriptive statistics, and comparisons between the control and intervention groups were conducted with independent-samples t-tests. A paired-samples t-test was used to assess the difference between stimulus-test and posttest mean knowledge scores in the intervention group. Effect size was computed using Cohen’s d [31]. Age-related differences in scores were analyzed with analysis of variance (ANOVA).
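The analyses above were run in SPSS; for readers who want to reproduce the two key statistics, a minimal numpy sketch of Student’s independent-samples t statistic (equal variances assumed) and Cohen’s d with a pooled standard deviation, on small hypothetical samples rather than the study data, might look like:

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of the two samples."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def independent_t(a: np.ndarray, b: np.ndarray) -> float:
    """Student's independent-samples t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var * (1 / na + 1 / nb))

# Hypothetical score samples (illustration only, not the study data)
intervention = np.array([12.0, 10.0, 14.0, 11.0])
control = np.array([9.0, 8.0, 10.0, 9.0])

t = independent_t(intervention, control)
d = cohens_d(intervention, control)
```

The p-value is then obtained from the t distribution with n_a + n_b − 2 degrees of freedom (SPSS, or `scipy.stats.ttest_ind`, reports it directly).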

2.6. Ethical Considerations

The study was approved by the university college where it was conducted and the Norwegian Social Science Data Services (project number 36135). Return of the questionnaire was considered to constitute consent to participate.

3. Results

Participant characteristics are shown in Table 3.

Table 3: Characteristics of the participants.

There was homogeneity on all tested characteristics between the two groups of students who participated in the posttest, with the exception of years of work experience (Table 3). We did not control for the differences in years of work experience at baseline.

Mean scores and standard deviations were calculated for all students who completed the knowledge test. Posttest knowledge scores were significantly higher in the intervention group (M=11.2, SD=3.5) than in the control group (M=8.9, SD=3.2), p<0.001. The effect size was d=0.68, considered a medium effect [22].
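The reported effect size can be checked directly from these summary statistics (means, SDs, and the posttest group sizes of 60 and 53 given in Section 2.3); small discrepancies in the second decimal are expected because the published means and SDs are rounded:

```python
import math

# Reported posttest summary statistics (group sizes from Section 2.3)
m_control, sd_control, n_control = 8.9, 3.2, 60
m_interv, sd_interv, n_interv = 11.2, 3.5, 53

# Pooled standard deviation, weighting each group's variance by n - 1
pooled_sd = math.sqrt(((n_control - 1) * sd_control**2 +
                       (n_interv - 1) * sd_interv**2) /
                      (n_control + n_interv - 2))

d = (m_interv - m_control) / pooled_sd
print(round(d, 2))  # approximately 0.69, consistent with the reported 0.68 given rounding
```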

The participants were divided into three age categories. The mean knowledge score for participants <22 years old was 8.5 (n=37) in the control group and 10.5 (n=31) in the intervention group; participants aged 22–26 years had a mean score of 9.1 (n=15) in the control group and 11.8 (n=8) in the intervention group; and the mean score for participants >26 years was 10.6 (n=8) in the control group and 12.4 (n=14) in the intervention group. These differences were not statistically significant.

The intervention group had significantly higher scores than the control group on questions concerning pathophysiology knowledge (p=0.001) and knowledge of symptoms (p<0.001) (Figure 1). Within both groups, questions concerning pathophysiology had lower scores than questions about symptoms and nursing actions.

Figure 1: Posttest knowledge scores in the control and intervention groups (pathophysiology: p=0.001; symptoms: p<0.001).

Forty students in the intervention group completed both the stimulus test and the posttest; their scores were significantly higher on the posttest than on the stimulus test (M=12.0, SD=3.2 versus M=8.9, SD=3.1), p<0.001. The effect size was d=1.1, considered a large effect [29].

4. Discussion

The purpose of the study was to examine the role of the educational intervention in the students’ acquisition of knowledge. The knowledge test results show significantly greater improvement in posttest scores in the intervention group than in the control group. One component of the intervention was the introduction of a stimulus test, and because we wanted to investigate the effect of this component, no such test was given to the control group. Within the intervention group, posttest scores were significantly higher than stimulus-test scores. In the following, we discuss the specific elements of the intervention that may have contributed to the difference in results between the two groups.

First, introduction of a knowledge test before the scenario simulation may stimulate students to strengthen their cognitive learning and is one aspect of participant preparation as described in the International Nursing Association for Clinical Simulation and Learning (INACSL) “Standards of Best Practice: Simulation” [32]. This is because a stimulus for learning can make the students more aware of both the knowledge they have and the knowledge they lack. It can thus serve as a “repetition trigger” prompting students to brush up on relevant topics such as pathophysiology in preparation for simulation exercises. Baseline theoretical knowledge is necessary for competence acquisition through SBL [33]. Similarly, Flood and Higbie [34] found that a relevant didactic lecture could be useful to strengthen cognitive learning on blood transfusion. Discovering one’s own lack of knowledge in a test conducted prior to the simulation may encourage more preparation and may furthermore increase students’ attention during simulation and debriefing because they recognize the relevance of the test content as they practice and reflect. This strengthened knowledge base gives more substance to students’ reflections and problem analysis during debriefing and thereby improves their knowledge development [23].

Viewing of video-recordings by students in the roles of nurses was the second component of the intervention that we see as potentially helpful. Video-assisted debriefing led by a facilitator is often used in SBL, although its effect is uncertain [35]. By completing the scenario and then watching their own video-recorded performance immediately afterward, the students who had acted as nurses had an opportunity to assess themselves; similar to the stimulus test, self-assessment can make students aware of both the knowledge they possess and the knowledge they lack to help them perform necessary actions. Video-based self-assessment in particular can help students develop awareness of their strengths and weaknesses [36]. However, it has been reported that some students “felt ashamed” when watching themselves onscreen [37]; to preclude this, we decided that our participants should view their performance alone, without interference from teachers and peers, so that they could focus on learning rather than on judgment from others. In this way, the students who had acted in the scenario also gained the observer’s perspective. We expected that this would reduce stress and thereby allow improved preparation before debriefing, leaving them readier to focus on knowledge development together with their peers during the debriefing session.

The third new element of the intervention was the structured observation form, which focused on specific actions in each scenario. In this study, both the observers and the facilitator used the same observation form, which may have contributed to a clear focus on knowledge of the signs of deteriorating conditions; when students respond appropriately and then verbalize their deliberations, knowledge application is taken to be demonstrated [33]. An observation form functions as a tool that mediates learning [38, 39] and draws students’ attention to the importance of change in patients’ condition. The use of observation tools has been reported to engage the observer in learning [40–42] and facilitate observational learning by focusing on important aspects [43–45]. The observation form may have triggered assessments of actions based on specific professional knowledge rather than an overall approach.

A schedule of six scenarios in the course of two days afforded the students the chance for repetitive practice of important actions involved in handling deteriorating patients. Although the scenarios were different, the focus remained on key observations and actions to counteract deterioration, allowing ample practice for observing and handling common events such as low blood pressure or oxygen deficiency. Repetition is recommended as a best practice in learning [46]. Marton [47] argues that students need to be exposed not only to similarities but also to differences, in order to connect knowledge to different situations. The observation form highlighted key observations and may have helped the students to verbalize these aspects, make them explicit, and thereby promote transfer of knowledge from one situation to another. The students were exposed to many variations through the scenarios, and this may have improved their knowledge about symptoms, explaining why they had the highest score on symptoms.

Students’ knowledge scores increased with age, in both groups. The finding was not significant but could indicate a trend. Though Shinnick, Woo, and Evangelista [48] claim that age is not a predictor of knowledge gain, increasing age may nevertheless indicate greater beneficial experience; thus, the stimulus test may be of even greater importance to younger students as a stimulus to learning—perhaps especially during SBL, for which baseline theoretical knowledge is one of several necessary antecedents [3, 33].

Both groups had the highest scores on knowledge of symptoms, lower scores on appropriate nursing actions, and the lowest scores on knowledge of pathophysiology. We can only speculate that it may be easier to acquire knowledge about symptoms and actions because this type of knowledge can be enhanced through visualization—by handling the actual symptoms of deteriorating patients, watching themselves on video, and taking the observer role. The use of manikins can be advantageous in this regard because symptoms can be portrayed via the manikin’s software, which can increase students’ attention to the symptoms. It is also possible that pathophysiology requires a deeper understanding, meaning that it involves knowledge as justification for action. Recognizing symptoms in time is an important part of identifying signs of deteriorating conditions [12], and therefore high scores on knowledge of symptoms are a valuable finding. The intervention group had significantly higher scores than the control group on knowledge of pathophysiology and symptoms. This indicates that the new components of stimulus test, video viewing, and observer forms positively influenced the students’ acquisition of knowledge. Both groups of students had limited clinical experience at this point in their education, which may explain why they did not achieve higher scores in general.

5. Limitations of the Study

The findings of this study may be of interest to educators because how to enhance students’ knowledge acquisition is an increasingly important issue in SBL. The results of this study, however, are based on a small sample recruited from only one school of nursing, which limits their generalizability. We used a convenience sample in this study, and the intervention group had 1.1 years more work experience than the control group. It is possible, though difficult to gauge, that this difference in work experience could have influenced the students’ overall scores. However, there was no significant difference between the two student groups in experience with critically ill patients.

Although the use of MCQs is a common approach in knowledge assessment, there are discussions about whether they really fit the purpose [49, 50]. Here, because the stimulus test and posttest consisted of the same questions, we are aware that students may have remembered correct answers from the stimulus test and therefore scored higher on the posttest. This may mean that students primarily gained knowledge from the stimulus test rather than from the other components of the intervention. Nevertheless, the significant increase in scores between the stimulus test and posttest in the intervention group suggests that the other components also contributed to the students’ knowledge acquisition. In addition, knowledge was measured only once after the simulation, thus yielding no information on long-term knowledge retention. Finally, correct answers on MCQs do not necessarily correspond with students’ actions in real situations of patient deterioration.

6. Conclusion

Students’ knowledge scores were compared before and after an educational intervention during SBL. The results showed significantly greater improvement of scores in the intervention group than in the control group. Based on these findings, we assume that pedagogical underpinning of SBL, which emphasizes improvement of students’ prerequisites for learning and strengthens the debriefing, can positively influence students’ knowledge acquisition.

Data Availability

The underlying data will be available through the USN Research Data Archive, DOI 10.23642/usn.6148562.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. D. Gaba, “The future vision of simulation in health care,” BMJ Quality & Safety, vol. 13, pp. i2–i10, 2004. View at Publisher · View at Google Scholar
  2. W. M. Nehring and F. R. Lashley, “Nursing Simulation: A Review of the Past 40 Years,” Simulation & Gaming, vol. 40, no. 4, pp. 528–552, 2009. View at Publisher · View at Google Scholar · View at Scopus
  3. P. Buykx, S. Cooper, L. Kinsman et al., “Patient deterioration simulation experiences: Impact on teaching and learning,” Collegian, vol. 19, no. 3, pp. 125–129, 2012. View at Publisher · View at Google Scholar · View at Scopus
  4. I. K. R. Hatlevik, “The theory-practice relationship: Reflective skills and theoretical knowledge as key factors in bridging the gap between theory and practice in initial nursing education,” Journal of Advanced Nursing, vol. 68, no. 4, pp. 868–877, 2012. View at Publisher · View at Google Scholar · View at Scopus
  5. P. F. Laursen, “Multiple bridges between theory and practice,” in From vocational to professional education: educating for social welfare, M. S. Jens-Christian Smeby, Ed., pp. 89–104, London, UK, Routledge, 2015. View at Google Scholar
  6. C. D. Merriman, L. C. Stayt, and B. Ricketts, “Comparing the effectiveness of clinical simulation versus didactic methods to teach undergraduate adult nursing students to recognize and assess the deteriorating patient,” Clinical Simulation in Nursing, vol. 10, no. 3, pp. e119–e127, 2014. View at Publisher · View at Google Scholar · View at Scopus
  7. M. A. Kelly, J. Forber, L. Conlon, M. Roche, and H. Stasa, “Empowering the registered nurses of tomorrow: Students' perspectives of a simulation experience for recognising and managing a deteriorating patient,” Nurse Education Today , vol. 34, no. 5, pp. 724–729, 2014. View at Publisher · View at Google Scholar · View at Scopus
  8. G. A. DeBourgh and S. K. Prion, “Using Simulation to Teach Prelicensure Nursing Students to Minimize Patient Risk and Harm,” Clinical Simulation in Nursing, vol. 7, no. 2, pp. e47–e56, 2011. View at Publisher · View at Google Scholar · View at Scopus
  9. E. L. Onda, “Situated Cognition: Its Relationship to Simulation in Nursing Education,” Clinical Simulation in Nursing, vol. 8, no. 7, pp. e273–e280, 2012. View at Publisher · View at Google Scholar · View at Scopus
  10. J. K. Tun, G. Alinier, J. Tang, and R. L. Kneebone, “Redefining Simulation Fidelity for Healthcare Education,” Simulation & Gaming, vol. 46, no. 2, pp. 159–174, 2015. View at Publisher · View at Google Scholar · View at Scopus
  11. P. Lavoie, J. Pepin, and S. Cossette, “Development of a post-simulation debriefing intervention to prepare nurses and nursing students to care for deteriorating patients,” Nurse Education in Practice, vol. 15, no. 3, pp. 181–191, 2015. View at Publisher · View at Google Scholar · View at Scopus
  12. D. Fisher and L. King, “An integrative literature review on preparing nursing students through simulation to recognize and respond to the deteriorating patient,” Journal of Advanced Nursing, vol. 69, no. 11, pp. 2375–2388, 2013. View at Publisher · View at Google Scholar · View at Scopus
  13. S. Lapkin, T. Levett-Jones, H. Bellchambers, and R. Fernandez, “Effectiveness of Patient Simulation Manikins in Teaching Clinical Reasoning Skills to Undergraduate Nursing Students: A Systematic Review,” Clinical Simulation in Nursing, vol. 6, no. 6, pp. e207–e222, 2010.
  14. P.-J. Oh, K. D. Jeon, and M. S. Koh, “The effects of simulation-based learning using standardized patients in nursing students: A meta-analysis,” Nurse Education Today, vol. 35, no. 5, pp. e6–e15, 2015.
  15. S. Shin, J.-H. Park, and J.-H. Kim, “Effectiveness of patient simulation in nursing education: Meta-analysis,” Nurse Education Today, vol. 35, no. 1, pp. 176–182, 2015.
  16. H. B. Yuan, B. A. Williams, J. B. Fang, and Q. H. Ye, “A systematic review of selected evidence on improving knowledge and skills through high-fidelity simulation,” Nurse Education Today, vol. 32, no. 3, pp. 294–298, 2012.
  17. C. Foronda, S. Liu, and E. B. Bauman, “Evaluation of simulation in undergraduate nurse education: An integrative review,” Clinical Simulation in Nursing, vol. 9, no. 10, pp. e409–e416, 2013.
  18. C. J. Connell, R. Endacott, J. A. Jackman, N. R. Kiprillis, L. M. Sparkes, and S. J. Cooper, “The effectiveness of education in the recognition and management of deteriorating patients: A systematic review,” Nurse Education Today, vol. 44, pp. 133–145, 2016.
  19. S. Lapkin, T. Levett-Jones, and C. Gilligan, “A systematic review of the effectiveness of interprofessional education in health professional programs,” Nurse Education Today, vol. 33, no. 2, pp. 90–102, 2013.
  20. P. Buykx, L. Kinsman, S. Cooper et al., “FIRST2ACT: Educating nurses to identify patient deterioration - A theory-based model for best practice simulation education,” Nurse Education Today, vol. 31, no. 7, pp. 687–693, 2011.
  21. S. Y. Liaw, A. Scherpbier, J.-J. Rethans, and P. Klainin-Yobas, “Assessment for simulation learning outcomes: A comparison of knowledge and self-reported confidence with observed clinical performance,” Nurse Education Today, vol. 32, no. 6, pp. e35–e39, 2012.
  22. P. H. Bailey, S. Mossey, S. Moroso, J. D. Cloutier, and A. Love, “Implications of multiple-choice testing in nursing education,” Nurse Education Today, vol. 32, no. 6, pp. e40–e44, 2012.
  23. D. Boud and N. Falchikov, “Aligning assessment with long-term learning,” Assessment & Evaluation in Higher Education, vol. 31, no. 4, pp. 399–413, 2006.
  24. A. Y. Kolb and D. A. Kolb, “Learning styles and learning spaces: enhancing experiential learning in higher education,” Academy of Management Learning and Education (AMLE), vol. 4, no. 2, pp. 193–212, 2005.
  25. L. Rourke, M. Schmidt, and N. Garga, “Theory-based research of high fidelity simulation use in nursing education: A review of the literature,” International Journal of Nursing Education Scholarship, vol. 7, no. 1, article 11, 2010.
  26. S. B. Issenberg, W. C. McGaghie, E. R. Petrusa, D. L. Gordon, and R. J. Scalese, “Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review,” Medical Teacher, vol. 27, no. 1, pp. 10–28, 2005.
  27. D. A. Schön, Educating the Reflective Practitioner, Jossey-Bass, San Francisco, Calif, USA, 1987.
  28. R. Simon, D. B. Raemer, and J. W. Rudolph, Debriefing Assessment for Simulation in Healthcare (DASH)© Rater’s Handbook, Center for Medical Simulation, Boston, Mass, USA, 2010.
  29. D. F. Polit and C. T. Beck, Nursing Research: Generating and Assessing Evidence for Nursing Practice, Wolters Kluwer, Philadelphia, Pa, USA, 10th edition, 2017.
  30. L. Hohwü, H. Lyshol, M. Gissler, S. H. Jonsson, M. Petzold, and C. Obel, “Web-based versus traditional paper questionnaires: a mixed-mode survey with a Nordic perspective,” Journal of Medical Internet Research, vol. 15, no. 8, p. e173, 2013.
  31. J. Cohen, Statistical Power Analysis for the Behavioral Sciences, L. Erlbaum Associates, Hillsdale, NJ, USA, 1988.
  32. International Nursing Association for Clinical Simulation & Learning, “INACSL Standards of Best Practice: Simulation℠: Simulation Design,” Clinical Simulation in Nursing, vol. 12, pp. 5–12, 2016.
  33. J. Hansen and M. Bratt, “Competence Acquisition Using Simulated Learning Experiences: A Concept Analysis,” Nursing Education Perspectives, vol. 36, no. 2, pp. 102–107, 2015.
  34. L. S. Flood and J. Higbie, “A comparative assessment of nursing students' cognitive knowledge of blood transfusion using lecture and simulation,” Nurse Education in Practice, vol. 16, no. 1, pp. 8–13, 2016.
  35. T. Levett-Jones and S. Lapkin, “A systematic review of the effectiveness of simulation debriefing in health professional education,” Nurse Education Today, vol. 34, no. 6, pp. e58–e63, 2014.
  36. M. S. Yoo, Y. J. Son, Y. S. Kim, and J. H. Park, “Video-based self-assessment: Implementation and evaluation in an undergraduate nursing course,” Nurse Education Today, vol. 29, no. 6, pp. 585–589, 2009.
  37. J. Pereira, L. Echeazarra, S. Sanz-Santamaría, and J. Gutiérrez, “Student-generated online videos to develop cross-curricular and curricular competencies in Nursing Studies,” Computers in Human Behavior, vol. 31, no. 1, pp. 580–590, 2014.
  38. R. Säljö, Lärande i praktiken: ett sociokulturellt perspektiv [Learning in Practice: A Sociocultural Perspective], Prisma, Stockholm, Sweden, 2000.
  39. E. Wenger, Communities of Practice: Learning, Meaning, and Identity, Learning in Doing: Social, Cognitive, and Computational Perspectives, Cambridge University Press, Cambridge, UK, 1998.
  40. G. L. Schaar, M. J. Ostendorf, and T. J. Kinner, “Simulation: Linking quality and safety education for nurses competencies to the observer role,” Clinical Simulation in Nursing, vol. 9, no. 9, pp. e407–e410, 2013.
  41. M. H. Reime, T. Johnsgaard, F. I. Kvam et al., “Learning by viewing versus learning by doing: A comparative study of observer and participant experiences during an interprofessional simulation training,” Journal of Interprofessional Care, vol. 31, no. 1, pp. 51–58, 2017.
  42. M. Wighus and I. T. Bjørk, “An educational intervention to enhance clinical skills learning: Experiences of nursing students and teachers,” Nurse Education in Practice, vol. 29, pp. 143–149, 2018.
  43. C. L. Krogh, C. Ringsted, C. B. Kromann et al., “Effect of engaging trainees by assessing peer performance: A randomised controlled trial using simulated patient scenarios,” BioMed Research International, vol. 2014, Article ID 610591, 7 pages, 2014.
  44. B. G. Kaplan, C. Abraham, and R. Gary, “Effects of participation vs. observation of a simulation experience on testing outcomes: implications for logistical planning for a school of nursing,” International Journal of Nursing Education Scholarship, vol. 9, no. 1, article 14, 2012.
  45. K. Stegmann, F. Pilz, M. Siebeck, and F. Fischer, “Vicarious learning during simulations: Is it more effective than hands-on training?” Medical Education, vol. 46, no. 10, pp. 1001–1008, 2012.
  46. D. A. Cook, S. J. Hamstra, R. Brydges et al., “Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis,” Medical Teacher, vol. 35, no. 1, pp. e867–e898, 2013.
  47. F. Marton, “Sameness and difference in transfer,” Journal of the Learning Sciences, vol. 15, no. 4, pp. 499–535, 2006.
  48. M. A. Shinnick, M. Woo, and L. S. Evangelista, “Predictors of Knowledge Gains Using Simulation in the Education of Prelicensure Nursing Students,” Journal of Professional Nursing, vol. 28, no. 1, pp. 41–47, 2012.
  49. S. Kardong-Edgren, N. Lungstrom, and R. Bendel, “VitalSim® Versus SimMan®: A Comparison of BSN Student Test Scores, Knowledge Retention, and Satisfaction,” Clinical Simulation in Nursing, vol. 5, no. 3, pp. e105–e111, 2009.
  50. T. Levett-Jones, S. Lapkin, K. Hoffman, C. Arthur, and J. Roche, “Examining the impact of high and medium fidelity simulation experiences on nursing students' knowledge acquisition,” Nurse Education in Practice, vol. 11, no. 6, pp. 380–383, 2011.