Research Article | Open Access
Kristian Krogh, Morten Pilegaard, Berit Eika, "Time for Reflection: The Balance between Repetition and Feedback in Resuscitation Training—A Randomised Controlled Trial", Advances in Emergency Medicine, vol. 2015, Article ID 593625, 7 pages, 2015. https://doi.org/10.1155/2015/593625
Time for Reflection: The Balance between Repetition and Feedback in Resuscitation Training—A Randomised Controlled Trial
Background. The simulation literature widely agrees that the reflective learning phase after a simulation is equally, or perhaps even more, important than the simulated scenario itself in ensuring learning. Nevertheless, advanced life support (ALS) training tends to comprise many simulated scenarios followed by short feedback sessions. The aim of this study was to compare the ability of two groups of novice learners to adhere to the ALS guidelines in their provision of ALS after receiving either 8 or 12 simulated resuscitation scenarios within the same 4 hours. Methods. This study was a randomised controlled trial. Participants were randomised to either the control group, with 12 scenarios (15 minutes per scenario) each followed by 5 minutes of feedback, or the intervention group, with 8 scenarios (15 minutes per scenario) each followed by 15 minutes of feedback. Results. There was no statistically significant difference in test scores between the intervention group and the control group in either the 1-week or the 12-week retention test. Conclusion. This study suggests that a lower number of repetitive ALS simulation scenarios does not diminish learning when the feedback is correspondingly prolonged to ensure sufficient time for reflection.
Current advanced life support (ALS) guidelines recommend cardiopulmonary resuscitation (CPR) with as few and as short pauses as possible to minimise hands-off time during rhythm control and defibrillation, thereby ensuring the highest possible quality of resuscitation [1–5]. Adherence to the recommended algorithm is critical, as every interruption in chest compressions (CC) decreases coronary and cerebral perfusion [1, 6, 7]. Achieving high-quality resuscitation requires the allocation of sufficient resources to training, including training equipment and instructors. Instructors are often high-value resources, not only in training but also in the clinical setting from which they are recruited for training purposes. These resources must therefore be utilised in the best possible way to maximise resuscitation learning and to justify the involved expenditure.
Knowledge, skills, and teamwork are required to master the ALS algorithm and to achieve high-quality resuscitation. Mastery of the ALS algorithm requires a number of skills and competences, such as pattern recognition, specific knowledge to ensure swift decision-making when examining the heart rhythm, the ability to identify the aetiology of the cardiac arrest, and the ability to decide which treatment should be instituted in response to the possible, reversible causes of the underlying condition [4, 5]. The many parallel tasks to be handled when examining the heart rhythm may in part lead some ALS instructors to seek to increase repetitions within the same timeframe. This is done by shortening CPR cycles, decreasing the time necessary for each scenario. However, this approach has been reported to involve a risk of negative training and to introduce a skewed perception of time related to the shortened CPR cycles. The number of repetitions may also be increased by decreasing the time for feedback, as this allows for a larger number of scenarios. However, it remains to be elucidated how many scenarios and how much feedback are needed in ALS training to ensure enough repetitions to consolidate knowledge and skills at a sufficient level within the time given.
The simulation literature widely agrees that the reflective learning phase, in the format of, for example, feedback and debriefing [10–12], is equally, or perhaps even more, important than the simulated scenario itself in ensuring learning. Feedback is defined differently according to the context in which it is used. In this study, the term “feedback” should be understood as the postscenario interactive, bidirectional, and reflective discussion focused on performance gaps related to the learning process and its outcomes. Ericsson, in his promotion of Deliberate Practice, argues that reflection and guided learning as well as repetition must be accompanied by informative feedback [13, 14]. This skill-acquisition theory highlights the value of feedback within a deliberately tailored, well-structured repetitive practice. This theory may very well have inspired instructors to increase the number of scenarios followed by short informative feedback. According to Kolb’s Experiential Learning Model, however, the simulated experience cannot by itself lead to solid learning; reflection as well as abstract conceptualisation of the experience is necessary to consolidate learning. Ericsson’s and Kolb’s learning theories both acknowledge and support feedback, although their premises are set at different points on the continuum between skills acquisition and a holistic experiential approach to learning. The extant literature [16–19] is not unanimous on how this continuum unfolds within the context of ALS, and we need a better understanding of how to balance the number of scenarios against the amount of feedback to best utilise the allocated time and resources.
Accordingly, the aim of this study was to compare the ability of two groups of ALS naive learners to adhere to the ALS guidelines in their provision of ALS in a simulated setting, where one group received 8 simulated resuscitation scenarios with a simulation : feedback ratio of 1 : 1 (8 × (15 minutes + 15 minutes) = 240 minutes) and the other group received 12 simulated resuscitation scenarios with a simulation : feedback ratio of 3 : 1 (12 × (15 minutes + 5 minutes) = 240 minutes), both within four hours.
We conducted a randomised, controlled, single-blinded intervention study embedded in a voluntary extracurricular ALS course.
Eligible participants were 294 7th semester medical students from two consecutive 7th semesters (163 and 131 students, resp.) at the Faculty of Health Sciences, Aarhus University, Aarhus, Denmark. Participants in the present study were recruited through email advertisements and through face-to-face invitations prior to lectures, in connection with recruitment for two previous, individual studies analysing students’ performance in two different semesters: “Compressed versus Real-Time CPR” and “Peer-Led versus Instructor-Led Feedback” (manuscript in preparation). The two semesters were randomised to either the intervention (8 Sim) group or the control (12 Sim) group. The “real-time group” and the “instructor-led feedback group”, who were both trained with real-time CPR and instructor-led feedback, served as participants in the 12 Sim group and the 8 Sim group, respectively. Exclusion criteria were prior ALS or other critical care training, for which participants were screened by systematic questioning.
The courses were conducted at the SkejSim simulation and skills training facility at Aarhus University Hospital.
As ALS courses in general are time-bound rather than outcome-bound, and with the intent to make the best use of the allocated time and resources, the ALS course was set up as a 1-day course following the ERC’s ALS guidelines 2010. The course schedule and scenarios are outlined in Table 1.
The course consisted of 3 hours of lectures and 4 hours of simulation. Scenarios were conducted in teams of four with rotating roles. Each participant was assigned the role of team leader in two or three scenarios (depending on allocation) and the role of team member in the remaining scenarios. Both groups were trained in real time during the simulation scenarios, that is, with 120 seconds of CPR between rhythm controls/defibrillation, as shown to be beneficial in a previous study.
2.3. The Intervention
All aspects of the course were identical except for the intervention, which related to the number of simulation scenarios and the associated time for feedback within the 4-hour simulation sessions. The intervention (8 Sim) group went through 8 scenarios (15 minutes each) with 15 minutes of feedback (8 × (15 + 15) = 240 minutes) compared with 12 scenarios (15 minutes each) with 5 minutes of feedback (12 × (15 + 5) = 240 minutes) for the control (12 Sim) group. The eight scenarios were identical for the two groups, and the additional four scenarios for the 12 Sim group were similar in content and length. Each scenario, including its introduction, lasted approximately 15 minutes, which left 5 minutes for feedback in the 12 Sim group (simulation : feedback ratio of 3 : 1) and 15 minutes for feedback in the 8 Sim group (simulation : feedback ratio of 1 : 1); that is, the 8 Sim group received three times as much feedback per scenario as the 12 Sim group in the 4 hours allocated for simulation.
To ensure consistency between the groups, feedback followed the same generic three-phase structure of a reaction, an analysis, and a summary phase [11, 21–24], using a variation of the Pendleton model for feedback. With 15 minutes used for each simulated scenario, the 12 Sim group had 5 minutes available for feedback sessions, while the 8 Sim group had 15 minutes. The longer time available to the 8 Sim group was used mainly to extend the analysis phase of the discussion and to allow more time for reflection (Figure 1).
Participants were randomised at group level so that the entire participant group from one of the two consecutive semesters was allocated to either the intervention or the control group. This was done to minimise potential instructor bias as no direct comparison was possible due to the time span between control and intervention courses.
2.4. Outcome Measures
Participants were assessed in the role of team leader in a test 1 week after the course and again 12 weeks after the course. To minimise the influence of test-enhanced learning, no test was performed immediately after the course [28, 29]. The validated ERC Cardiac Arrest Simulation Test (CASTest), developed for the ERC ALS provider course, was used in both retention tests. A resuscitation team of four people was available during both tests to ensure that the setting was the same as during the course. Faculty assistants acted as team members during the tests and were not otherwise involved in the course. The team leader was instructed to delegate tasks to the assistants in the same way as during the course, except for diagnostic advice, including ECG interpretation.
The sequence of the test scenario consisted of a nonshockable rhythm twice (pulseless electrical activity; PEA1 and PEA2), a shockable rhythm twice (ventricular fibrillation; VF1 and VF2), and return of spontaneous circulation (ROSC). This particular CASTest consists of a total of 23 assessable actions; for scoring purposes, the original pass/fail score was modified to a zero-to-five-point score, equivalent to the modification used for the validation. The maximum possible score is therefore 115 and the minimum score zero. All tests were video-recorded, and it was ensured that the quality of the video was sufficient to allow scoring of all items during subsequent assessments by three individual raters, of whom two were blinded. All raters had received rater training in which the 23 assessable actions were addressed and discussed during the assessment of training videos and in which rater scoring was aligned. During the ongoing rating, 10 random recordings were selected and compared to ensure continued coherence amongst the rater assessments.
2.5. Statistical Analysis
The sample size calculation was based on a pilot study which demonstrated an average score of 91 points (out of 115; standard deviation (SD) = 11). Assuming a difference of 10 points, a power of 80%, and a two-tailed α of 5%, 21 participants in each group were required to detect a difference. To compensate for potential dropouts, 64 students were intended to be enrolled in this study. Of the 294 7th semester medical students from two consecutive 7th semesters (163 and 131 students, resp.), 128 were invited to the voluntary extracurricular ALS courses. Of the 128 participants, 64 were randomised to and ultimately enrolled in this study. The remaining 64 course participants were enrolled in a parallel study, as illustrated in Figure 2.
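The reported sample size can be checked against the standard two-sample normal-approximation power formula. The sketch below is illustrative only (it is not the authors' actual calculation), using the stated SD of 11, an assumed difference of 10 points, 80% power, and a two-tailed α of 5%.

```python
from math import ceil

from scipy.stats import norm


def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample t-test."""
    z_a = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-tailed alpha of 0.05
    z_b = norm.ppf(power)          # ~0.84 for 80% power
    return 2 * ((z_a + z_b) * sd / delta) ** 2


n = n_per_group(delta=10, sd=11)
print(ceil(n))  # normal approximation gives 19 per group
```

A t-distribution correction adds roughly one to two participants per group, which is broadly consistent with the 21 per group reported above.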
To estimate interrater reliability, the intraclass correlation coefficient (ICC) was calculated.
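As an illustration of what such an interrater calculation involves, the sketch below computes a two-way random-effects ICC for the average of k raters (the Shrout and Fleiss ICC(2,k) form) on hypothetical ratings; both the data and the choice of ICC form are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np


def icc2k(scores):
    """ICC(2,k): two-way random-effects ICC for the average of k raters.

    scores: (n_targets, k_raters) array of ratings.
    Mean-square formulation following Shrout and Fleiss.
    """
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
                    - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)


# hypothetical example: 10 recordings scored by 3 well-aligned raters
rng = np.random.default_rng(1)
true_scores = np.linspace(70, 110, 10)
ratings = true_scores[:, None] + rng.normal(0, 2, (10, 3))
icc = icc2k(ratings)  # close to 1 when raters agree closely
```

In practice a statistics package would report the ICC together with its confidence interval, as in the results below.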
Data were analysed using IBM SPSS Statistics (version 23). CASTest results are presented as mean ± standard deviation (SD) and the confidence interval (CI) for the mean difference in Table 2. The statistical significance of the difference in outcome between the 12 Sim group and the 8 Sim group was tested by applying independent-samples t-tests with equal variances to the three raters’ average scores, as data were found to be normally distributed when tested using the Kolmogorov-Smirnov (K-S) test of normality.
A paired-samples t-test was used to compare the CASTest results from the 1-week test and the 12-week test within the 12 Sim group and the 8 Sim group, respectively.
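The analysis pipeline described above (normality check, independent-samples comparison between groups, and paired comparison within a group) can be sketched with SciPy as follows, using simulated scores with roughly the reported group means and SD rather than the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# hypothetical rater-averaged CASTest scores (max 115), for illustration only
sim8_1wk = rng.normal(89, 11, 23)    # 8 Sim group, 1-week test
sim12_1wk = rng.normal(91, 11, 23)   # 12 Sim group, 1-week test
sim8_12wk = sim8_1wk + rng.normal(5, 6, 23)  # same participants, 12-week test

# K-S test of normality on the standardised scores
z = (sim8_1wk - sim8_1wk.mean()) / sim8_1wk.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, 'norm')

# independent-samples t-test (equal variances) between the two groups
t_ind, p_ind = stats.ttest_ind(sim8_1wk, sim12_1wk, equal_var=True)

# paired-samples t-test within the 8 Sim group across the two retention tests
t_rel, p_rel = stats.ttest_rel(sim8_1wk, sim8_12wk)
```

The group sizes and the 12-week score shift are assumptions made only so that the snippet runs end to end.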
All participants gave written informed consent to participate in the study and to be video-recorded during early and late retention tests. The participants’ anonymity was guaranteed.
The Central Denmark Region Committees on Health Research Ethics and the Danish Data Protection Agency were approached, but both waived their right to approve or dismiss the described study as it was conducted in a nonclinical setting. Due to the low-risk profile of the study and the nature of the data collected, the study required no formal approval.
A total of 46 of the 64 enrolled participants completed the course and the 1-week test (17 dropouts prior to the course: 9 and 8 from the two groups, resp.). Of these, 43 completed both the 1-week and the 12-week test.
The ICC was 0.976 (95% CI 0.965–0.984), indicating excellent agreement between raters (ICC above 0.95). Given this excellent interrater reliability, the average rater score was used for the subsequent analysis of the data.
There was no statistically significant difference between the test scores of the intervention (8 Sim) group and the control (12 Sim) group. In the 1-week test, the 8 Sim group achieved a mean score of 89, while the 12 Sim group’s mean score was 91. Correspondingly, no statistically significant difference was found between the two groups in the 12-week test: mean scores of 95 (8 Sim) and 92 (12 Sim). No difference was seen between the 1-week test and the 12-week test within the control (12 Sim) group. The intervention (8 Sim) group, on the other hand, showed a statistically significant improvement from the 1-week test to the 12-week test, as shown in Table 2. Figure 3 illustrates the CASTest results for both groups.
In the present study, we found no statistically significant difference between the two groups’ test scores. This comparison indicates that a higher number of simulated scenarios does not improve performance when participants are tested on their ability to adhere to the ALS guidelines. This invites an important question when developing and implementing simulation scenarios: how much training is needed? Previous studies on laparoscopic skills and epidural cannulation skills suggest that performance plateaus on learning curves are reached after 2 to 75 repetitions, depending on the complexity of the skill, the learner’s experience, and the desired performance plateau [31, 32]. The results of the present study could be interpreted as suggesting that the participants from the two groups merely reached the same performance plateau. All scenarios were of equal length, so the major difference, other than the number of scenarios, was the time used for feedback. The control (12 Sim) group had 5 minutes available for feedback sessions and the intervention (8 Sim) group had 15 minutes. In ALS training, a tendency has been observed towards running as many scenarios as possible to ensure multiple repetitions of the resuscitation algorithm in a variety of settings, at the expense of time for reflective feedback. Our study suggests that this is not necessarily the better approach, because fewer scenarios with longer feedback produced the same results as more scenarios with less feedback. Repetitive scenarios during ALS training with precise performance feedback could be interpreted as deliberate practice. In paediatric resuscitation, Ericsson has inspired a concept called Rapid Cycle Deliberate Practice (RCDP), which Hunt et al. showed improved paediatric residents’ resuscitation skills in a setting where direct coaching, rather than longer reflective feedback, was provided on repetitions of the same scenario focused on the tasks in the first 5 minutes of paediatric resuscitation. Both of the present training schemes could be seen as coherent with the thoughts of Ericsson and his Deliberate Practice [13, 14], which highlights the value of deliberate, well-structured practice, guided and evaluated by experts giving feedback on performance. To elicit improvement, feedback, whether of short or long duration, needs to be structured and closely tailored to the task performed.
In the present study, the 12 Sim group had 5-minute feedback sessions and the 8 Sim group had 15-minute feedback sessions. Such feedback cannot be directly compared to RCDP, which deliberately focuses on, and is used for, specific algorithm-related skills. These skills are essential, but they need to be supplemented to ensure not only sufficient skills and knowledge but also leadership and teamwork in the provision of ALS.
For ALS training to incorporate all the elements of ALS, and not only algorithm skills, facilitated reflective feedback at some level beyond informative feedback or coaching needs to be included. The competencies required in the provision of ALS highlight the value of deliberate, well-structured practice that balances the quality and quantity of feedback and repetitions without compromising either.
The aim of this study was to compare the ability of two groups of novice learners to adhere to the ALS guidelines in their provision of ALS in a simulated setting, where one group received 8 and the other group 12 simulated resuscitation scenarios, both within 4 hours. The results show that the learners did indeed perform equally well. The retention of learned ALS skills after 12 weeks was expected to be at a level equivalent to that of the 1-week test, as previous studies show that a decline in skills on average appears to occur between 6 and 12 months after ALS training. Some studies even report that skills were retained as long as 14 months after training. However, the present study also showed that the 8 Sim group improved their performance significantly from the 1-week to the 12-week retention test, although neither group received any additional resuscitation training or relevant clinical exposure in the interim. This improvement in the 8 Sim group may be explained by reference to Kolb’s Experiential Learning Model. These learners may have been able to benefit from the longer feedback provided, which may have deepened their reflection and abstract conceptualisation, turning the simulated experience and experimentation into learned and usable knowledge and skills in the provision of ALS.
Ericsson’s and Kolb’s learning theories are both learner centred and acknowledge the need for feedback, and despite their differences, neither theory necessarily supersedes the other. Learning the provision of ALS is set within the continuum between the acquisition of skills and holistic experiential learning, and multiple variable factors are involved within this continuum. Knowledge of the learners and their level is essential to ensure training at the right level. Even when learners are novices with no prior ALS training and presumably form a homogeneous group, differences in learning style and speed are present. This emphasises the need for a learner centred focus and for flexibility when preparing and conducting ALS training.
4.1. Limitations of the Study
A limitation of the present study is that the participants were an ALS naive group of medical students whose characteristics and needs during simulation scenarios and feedback may differ from those of more advanced learners or experts. Future studies should include learners across a wider range of expertise and should examine, in larger cohorts and over longer retention intervals, the relationship between increased feedback and the improved performance putatively derived from abstract conceptualisation. In addition, while this study, like most ALS courses, was time-bound rather than outcome-bound, studies of the latter, focusing on practical mastery outcomes, would be of interest.
No statistically significant differences were found between the intervention (8 Sim) group and the control (12 Sim) group.
Our study suggests that a lower number of repetitive ALS simulation scenarios does not diminish learning when the feedback is correspondingly prolonged to ensure sufficient time for reflection. The present results support the possibility of being flexible when designing and conducting ALS training, as it appears that the quality of feedback is as important as the quantity of scenarios when a learner centred focus is present.
Conflict of Interests
The authors have no conflict of interests related to topics or data discussed in this paper.
The authors thank the Tryg Foundation (TrygFonden) (Grant no. 7-11-1189), The Laerdal Foundation for Acute Medicine (Grant no. 30006), SkejSim (Grant no. 1112231324601424372), and The Central Denmark Region Health Scientific Research Fund (Grant no. 1-30-72-114-10) for their financial support. They thank the ERC for granting them permission to use the previous validated CASTest. They wish to thank all the graduates for their participation in this study.
- L. M. Cunningham, A. Mattu, R. E. O'Connor, and W. J. Brady, “Cardiopulmonary resuscitation for cardiac arrest: the importance of uninterrupted chest compressions in cardiac arrest resuscitation,” American Journal of Emergency Medicine, vol. 30, no. 8, pp. 1630–1638, 2012.
- C. Nishiyama, T. Iwami, T. Kawamura et al., “Quality of chest compressions during continuous CPR; comparison between chest compression-only CPR and conventional CPR,” Resuscitation, vol. 81, no. 9, pp. 1152–1155, 2010.
- G. D. Perkins, S. J. Brace, M. Smythe, G. Ong, and S. Gates, “Out-of-hospital cardiac arrest: recent advances in resuscitation and effects on outcome,” Heart, vol. 98, no. 7, pp. 529–535, 2012.
- C. D. Deakin, L. J. Morrison, P. T. Morley et al., “Part 8: advanced life support: 2010 International consensus on cardiopulmonary resuscitation and emergency cardiovascular care science with treatment recommendations,” Resuscitation, vol. 81, no. 1, supplement, pp. e93–e174, 2010.
- R. W. Neumar, C. W. Otto, M. S. Link et al., “Part 8: adult advanced cardiovascular life support: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care,” Circulation, vol. 122, supplement 3, pp. S729–S767, 2010.
- S. Steen, Q. Liao, L. Pierre, A. Paskevicius, and T. Sjöberg, “The critical importance of minimal delay between chest compressions and subsequent defibrillation: a haemodynamic explanation,” Resuscitation, vol. 58, no. 3, pp. 249–258, 2003.
- A. Pranskunas, P. Dobožinskas, V. Pilvinis et al., “New insights for adult cardiopulmonary resuscitation. Up-coming resuscitation guidelines 2010,” Medicina, vol. 46, no. 9, pp. 571–580, 2010.
- J. Summers, “Simulation-based military training: an engineering approach to better addressing competing environmental, fiscal, and security concerns,” Journal of the Washington Academy of Sciences, vol. 98, no. 1, pp. 9–30, 2012.
- K. B. Krogh, C. B. Høyer, D. Østergaard, and B. Eika, “Time matters—realism in resuscitation training,” Resuscitation, vol. 85, no. 8, pp. 1093–1098, 2014.
- W. C. McGaghie, S. B. Issenberg, E. R. Petrusa, and R. J. Scalese, “A critical review of simulation-based medical education research: 2003–2009,” Medical Education, vol. 44, no. 1, pp. 50–63, 2010.
- R. M. Fanning and D. M. Gaba, “The role of debriefing in simulation-based learning,” Simulation in Healthcare, vol. 2, no. 2, pp. 115–125, 2007.
- S. B. Issenberg, W. W. C. McGaghie, E. R. Petrusa, D. Lee Gordon, and R. J. Scalese, “Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review,” Medical Teacher, vol. 27, no. 1, pp. 10–28, 2005.
- K. A. Ericsson, “Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains,” Academic Medicine, vol. 79, no. 10, pp. S70–S81, 2004.
- K. A. Ericsson, R. T. Krampe, and C. Tesch-Römer, “The role of deliberate practice in the acquisition of expert performance,” Psychological Review, vol. 100, no. 3, pp. 363–406, 1993.
- D. Kolb, Experiential Learning: Experience As the Source of Learning and Development, Pearson Education, Englewood Cliffs, NJ, USA, 1984.
- E. Fernandez Castelao, S. G. Russo, M. Riethmüller, and M. Boos, “Effects of team coordination during cardiopulmonary resuscitation: a systematic review of the literature,” Journal of Critical Care, vol. 28, no. 4, pp. 504–521, 2013.
- S. Boet, M. D. Bould, B. Sharma et al., “Within-team debriefing versus instructor-led debriefing for simulation-based education: a randomized controlled trial,” Annals of Surgery, vol. 258, no. 1, pp. 53–58, 2013.
- L. A. Devine, J. Donkers, R. Brydges, V. Perelman, R. B. Cavalcanti, and S. B. Issenberg, “An equivalence trial comparing instructor-regulated with directed self-regulated mastery learning of advanced cardiac life support skills,” Simulation in Healthcare, vol. 10, no. 4, pp. 202–209, 2015.
- S. Boet, M. D. Bould, H. R. Bruppacher, F. Desjardins, D. B. Chandra, and V. N. Naik, “Looking in the mirror: self-debriefing versus instructor debriefing for simulated crises,” Critical Care Medicine, vol. 39, no. 6, pp. 1377–1381, 2011.
- C. D. Deakin, J. P. Nolan, J. Soar et al., “European resuscitation council guidelines for resuscitation 2010 section 4. Adult advanced life support,” Resuscitation, vol. 81, no. 10, pp. 1305–1352, 2010.
- I. Motola, L. A. Devine, H. S. Chung, J. E. Sullivan, and S. B. Issenberg, “Simulation in healthcare education: a best evidence practical guide. AMEE Guide No. 82,” Medical Teacher, vol. 35, no. 10, pp. e1511–e1530, 2013.
- A. Cheng, D. L. Rodgers, É. van der Jagt, W. Eppich, and J. O'Donnell, “Evolution of the Pediatric Advanced Life Support course: enhanced learning with a new debriefing tool and Web-based module for Pediatric Advanced Life Support instructors,” Pediatric Critical Care Medicine, vol. 13, no. 5, pp. 589–595, 2012.
- E. Salas, C. Klein, H. King et al., “Debriefing medical teams: 12 evidence-based best practices and tips,” Joint Commission Journal on Quality and Patient Safety, vol. 34, no. 9, pp. 518–527, 2008.
- J. W. Rudolph, R. Simon, P. Rivard, R. L. Dufresne, and D. B. Raemer, “Debriefing with good judgment: combining rigorous feedback with genuine inquiry,” Anesthesiology Clinics, vol. 25, no. 2, pp. 361–376, 2007.
- D. Pendleton, T. Schofield, P. Tate, and P. Havelock, The Consultation: An Approach to Learning and Teaching, Oxford General Practice, Oxford University Press, 1984.
- K. F. Schulz and D. A. Grimes, “Blinding in randomised trials: hiding who got what,” The Lancet, vol. 359, no. 9307, pp. 696–700, 2002.
- K. F. Schulz, D. G. Altman, and D. Moher, “CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials,” Annals of Internal Medicine, vol. 152, no. 11, pp. 698–702, 2010.
- D. P. Larsen, A. C. Butler, and H. L. Roediger, “Test-enhanced learning in medical education,” Medical Education, vol. 42, no. 10, pp. 959–966, 2008.
- H. L. Roediger III and J. D. Karpicke, “Test-enhanced learning: taking memory tests improves long-term retention,” Psychological Science, vol. 17, no. 3, pp. 249–255, 2006.
- C. Ringsted, F. Lippert, R. Hesselfeldt et al., “Assessment of Advanced Life Support competence when combining different test methods—reliability and validity,” Resuscitation, vol. 75, no. 1, pp. 153–160, 2007.
- D. J. Scott, W. N. Young, S. T. Tesfay, W. H. Frawley, R. V. Rege, and D. B. Jones, “Laparoscopic skills training,” The American Journal of Surgery, vol. 182, no. 2, pp. 137–142, 2001.
- W. C. Brunner, J. R. Korndorffer Jr., R. Sierra et al., “Laparoscopic virtual reality training: are 30 repetitions enough?” Journal of Surgical Research, vol. 122, no. 2, pp. 150–156, 2004.
- E. A. Hunt, J. M. Duval-Arnould, K. L. Nelson-McMillan et al., “Pediatric resident resuscitation skills improve after “Rapid Cycle Deliberate Practice” training,” Resuscitation, vol. 85, no. 7, pp. 945–951, 2014.
- C.-W. Yang, Z.-S. Yen, J. E. McGowan et al., “A systematic review of retention of adult advanced life support knowledge and skills in healthcare providers,” Resuscitation, vol. 83, no. 9, pp. 1055–1060, 2012.
- D. B. Wayne, V. J. Siddall, J. Butter et al., “A longitudinal study of internal medicine residents' retention of advanced cardiac life support skills,” Academic Medicine, vol. 81, no. 10, supplement, pp. S9–S12, 2006.
Copyright © 2015 Kristian Krogh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.