Review Article | Open Access
Kivanc Atesok, Shepard Hurwitz, Donald D. Anderson, Richard Satava, Geb W. Thomas, Ted Tufescu, Michael J. Heffernan, Efstathios Papavassiliou, Steven Theiss, J. Lawrence Marsh, "Advancing Simulation-Based Orthopaedic Surgical Skills Training: An Analysis of the Challenges to Implementation", Advances in Orthopedics, vol. 2019, Article ID 2586034, 7 pages, 2019. https://doi.org/10.1155/2019/2586034
Advancing Simulation-Based Orthopaedic Surgical Skills Training: An Analysis of the Challenges to Implementation
Simulation-based surgical skills training is recognized as a valuable method to improve trainees’ performance and broadly perceived as essential for the establishment of a comprehensive curriculum in surgical education. However, there needs to be improvement in several areas for meaningful integration of simulation into surgical education. The purpose of this focused review is to summarize the obstacles to a comprehensive integration of simulation-based surgical skills training into surgical education and board certification and suggest potential solutions for those obstacles. First and foremost, validated simulators need to be rigorously assessed to ensure their feasibility and cost-effectiveness. All simulation-based courses should include clear objectives and outcome measures (with metrics) for the skills to be practiced by trainees. Furthermore, these courses should address a wide range of issues, including assessment of trainees’ problem-solving and decision-making abilities and remediation of poor performance. Finally, which simulation-based surgical skills courses will become a standard part of the curriculum across training programs and which will be of value in board certification should be precisely defined. Sufficient progress in these areas will prevent excessive development of training and assessment tools with duplicative effort and large variability in quality.
1. Introduction
Simulation-based surgical skills training is widely recognized as a valuable method for improving trainees’ performance and as an essential part of a comprehensive curriculum in surgical education [1–6]. Despite tremendous efforts to integrate simulation into orthopaedic surgical education, there is a continued need for simulation-based education to become a more substantial part of orthopaedic curricula. The rapid growth of orthopaedic techniques, along with increasing subspecialization, necessitates the wide and efficacious adoption of simulators as basic educational tools in orthopaedic curricula. Evidence suggests that surgical simulation has positive effects on improving trainees’ performance in the operating room (OR) [7–10]. As a result, there is currently a consensus among surgical specialties, including the orthopaedic community, that the development of trainees’ surgical skills should ideally commence in a simulated training environment prior to progression to the OR. The purpose of the current review is to summarize the obstacles to, and provide recommendations for, a comprehensive integration of simulation-based surgical skills training into surgical education and board certification.
2. Overcoming Barriers to the Integration of Simulation-Based Surgical Skills Training into Surgical Education
2.1. Validation of Simulation-Based Surgical Skills Training
Although trainees at all levels—residents, fellows, and practicing surgeons—appreciate the introduction of simulation into surgical education, validation of simulation-based surgical skills training is the most important area that still needs further improvement. Simulators and curricula must be rigorously assessed and validated so that training can be proven to be valuable and performance can be reliably assessed.
Simulated surgical skills training should pass multiple validity tests prior to routine use in surgical curricula and competency assessment. Standard validity tests include face, content, construct, and concurrent validity. In the following paragraphs, we will define and elucidate the subtle differences between these key validity tests.
Face validity refers to a simulator’s relevance as it appears to participants. In other words, a simulator platform can be said to have face validity if it “looks like” it correctly simulates the intended task/situation. Face validity is usually assessed subjectively via participants’ responses to postsimulation questionnaires. Arikatla et al. assessed the face validity of a virtual navigation task trainer developed to simulate laparoscopic surgery tasks. After performing the tasks, the participants were asked to rate the simulator’s face validity from 1 (not realistic/useful) to 5 (very realistic/useful) on a 10-item questionnaire. The subjects rated the simulator as highly useful by responding to 90% (9 out of 10) of the questions with a score of three or above.
Content validity refers to the extent to which a simulator measures the relevant aspects of the surgical task under study. For example, a simulator may lack content validity if it only assesses the accuracy of the reduction and hardware used in a simulated fracture fixation but fails to account for the correctness in the stepwise performance of the procedure. Alsalamah et al. studied the content validity of a virtual reality simulator reflecting real transvaginal ultrasound (TVUS) scanning. The content validity assessment statements aimed to include all the aspects of TVUS scanning, such as the simulator’s ability to test normal gynaecological anatomy and early pregnancy structures, and also the simulator’s realism in providing measurements of endometrial thickness, ovaries, and crown-rump length. Participants scored the simulator on a visual analog scale based on their subjective perceptions of the simulator’s accuracy in assessing the items included in the content validity scoring list. The median scores demonstrated strong agreement with the content validity statements.
Construct validity defines the degree to which the simulator measures the specific surgical skills that it was designed for. Construct validity can be described as the appropriateness of inferences made on the basis of test scores. In surgical simulation research, the experience levels of subjects testing on a simulator are often used to assess construct validity. For example, if a simulator correctly detects expected variations in the proficiency levels between the expert and novice subjects—in other words, if it correctly identifies quantifiable aspects of a surgical skill—construct validity has been achieved. Lopez et al. studied the construct validity of an arthroscopy simulator in a group of participants that included medical students, residents, and attending physicians. The arthroscopic surgical simulator objectively demonstrated that the attending physicians and senior residents performed at a higher level than the junior residents and novice medical students.
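A construct-validity check of this kind can be reduced to a simple ordering test on group scores: performance should rise with experience level. The sketch below is a hypothetical illustration; the scores, group sizes, and function name are invented, not taken from the Lopez et al. study.

```python
# Hypothetical construct-validity check: a simulator shows construct
# validity if its scores separate participants by experience level.
# All data below are illustrative.
from statistics import mean

def construct_validity_ordered(groups):
    """Return True if mean scores strictly increase with experience.

    `groups` is a list of (label, scores) pairs ordered novice -> expert.
    """
    means = [mean(scores) for _, scores in groups]
    return all(a < b for a, b in zip(means, means[1:]))

groups = [
    ("medical students", [41, 38, 45, 40]),
    ("junior residents", [55, 60, 52, 58]),
    ("senior residents", [72, 68, 75, 70]),
    ("attendings",       [88, 85, 90, 87]),
]
print(construct_validity_ordered(groups))  # True: scores rise with experience
```

In practice a formal statistical test (e.g., a rank-based test between groups) would replace the bare mean comparison, but the logic of the inference is the same.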
Concurrent validity refers to the correspondence between trainees’ performance on a tested simulator and a previously validated measure. Researchers often assess the concurrent validity of surgical simulators by comparing participants’ simulator scores with their objective structured assessment of technical skill (OSATS) scores. However, comparison with another previously validated simulator can also be used to endorse concurrent validity. In a study including three groups of participants with different levels of expertise (novice, intermediate, and expert) in pedicle screw insertion, Fürst et al. assessed the face, content, construct, and concurrent validity of a novel simulator for minimally invasive spine surgery. The authors used questionnaires with a 5-point Likert scale to assess face and content validity. They evaluated the construct validity using a simulation-based performance score that they calculated for pedicle screw insertion and compared among the three groups. To establish concurrent validity, the performance of each participant was assessed by a specialist using a task-specific checklist and OSATS, and the association between the specialist rating and the simulation-based performance score was evaluated. Their results demonstrated significantly better simulation-based performance scores in the expert group than in the novice and intermediate groups (construct validity). The association between the specialist’s ratings and the simulation-based performance score (concurrent validity) was strong (R = 0.86).
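The concurrent-validity analysis described here amounts to correlating two score series, the simulator score and the validated external rating. A minimal sketch with invented data follows; the function and all values are illustrative, and the strong correlation it produces is not the R = 0.86 reported in the study.

```python
# Illustrative concurrent-validity check: correlate simulator performance
# scores with a previously validated measure (e.g., expert OSATS ratings).
# All data are invented for demonstration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sim_scores  = [42, 55, 61, 70, 78, 85]   # simulator performance scores
osats_score = [14, 18, 19, 24, 27, 29]   # specialist OSATS ratings
r = pearson_r(sim_scores, osats_score)
print(round(r, 2))
```

A high r, together with demonstrated reliability of both measures, is what supports the concurrent-validity claim.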
Although it is essential to prove face, content, construct, and concurrent validity for a simulation-based training platform to be considered for integration into educational curricula, transferability of the skills practiced on a simulator to performance on patients in the OR should also be demonstrated. In a randomized study, Howells et al. investigated the effect of laboratory-based simulator training on surgical trainees’ ability to perform diagnostic arthroscopy of the knee in the OR. The simulator-trained group performed significantly better than the control group, demonstrating the transferability of psychomotor skills from simulator training to arthroscopy in the OR.
Arguably, the correlation between performance on the simulator with OR performance on patients provides the most convincing evidence to move forward. The reliability of both the simulated measurement of skills and the measures obtained from actual practice along with good-to-excellent correlation between these two assessments is an absolute necessity for a simulation-based surgical skills course to become part of a standardized educational curriculum. Unfortunately, demonstrating transferability of surgical skills requires fairly sophisticated research methodology, a sufficient number of learners to have the power to demonstrate important differences, and patient consent for the operating room portion of the trial. These issues provide substantial challenges to this type of validation of a simulation course.
2.2. Toward Evaluating Surgical Skills in the OR
Evaluating learners’ surgical skills in the OR environment after simulation-based training in laboratories requires objective and standard measurement techniques that are also sensitive enough to differentiate various levels of expertise. A fundamental challenge preventing the movement toward competency-based education is establishing objective criteria for when a resident is ready to advance to the next level. The main reason such criteria have not been established may be the lack of standard, objective, reliable, and sensitive surgical skill measurement methods. Szasz et al. reviewed 6,814 publications and identified 85 surgical simulator studies that assessed trainees’ technical competence. The authors noted that “Very few studies identify standard-setting approaches that differentiate competent versus noncompetent performers; subsequently, this has been identified as an area with great research potential.”
Specific surgical skills for various types of procedures can be hard to define, making it even more difficult to precisely measure them. For example, an individual skill such as navigating a surgical wire into a specific location in a bone is a complex interplay of task-specific knowledge, general knowledge of the anatomy and tools, hand-eye coordination, team communication, dexterity, self-control, experience, and a bit of luck. Hence, defining the essential elements (tasks) of the procedure-specific and surgical skills to be measured is critically important for accurate assessments of competency in the OR.
By definition, all procedures are composed of specific tasks, and all tasks are composed of specific skills. The most common method is to perform “task deconstruction,” which accurately defines each task (e.g., anastomosis) and breaks that task into each of its elemental skills (e.g., tissue alignment, needle driving, suturing, and knot tying). It is true that subspecialties need to develop skills that are unique to their specialty; however, basic skills such as suturing, knot tying, developing dissection planes, and anastomosis are applicable to all orthopaedic (and other surgical) subspecialties as well.
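Task deconstruction lends itself to a simple data representation: a procedure maps to its tasks, and each task to its elemental skills, which then become the units of assessment. A hypothetical sketch using the anastomosis example from the text:

```python
# A sketch of "task deconstruction": a procedure broken into tasks, and
# each task into its elemental skills. The structure mirrors the example
# in the text; any larger procedure would simply add more entries.
procedure = {
    "anastomosis": [
        "tissue alignment",
        "needle driving",
        "suturing",
        "knot tying",
    ],
}

# Elemental skills are the unit of measurement for competency assessment.
skills = [skill for task_skills in procedure.values() for skill in task_skills]
print(len(skills))  # 4 elemental skills to assess
```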
One other, more global approach to assessing surgical skills in a clinical setting is to systematically observe the objective elements of a surgeon’s behavior. Currently, surgical task-specific checklists and OSATS are two commonly used techniques for measuring surgical skills; however, studies have demonstrated that OSATS scores do not always correlate well with outcomes [18, 19]. In a study of simulated distal radius fracture fixation, Putnam et al. found that the participants’ OSATS scores had no correlation with the actual mechanical strength of the fixation they performed. Therefore, to tie simulation-based surgical skills training to OR performance and surgical results, meticulously defined procedure-specific tasks should be assessed using standard measurement protocols that combine more than one objective technique, such as OSATS, motion analysis, and direct objective metrics such as accuracy of reduction, time to skill completion, and strength of a fracture fixation construct (Figure 1).
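A combined measurement protocol of the kind proposed here could, for instance, normalize each objective metric to a common scale and form a weighted composite score. The metric names, ranges, and weights below are purely illustrative assumptions, not values from the article:

```python
# Hypothetical composite skill score combining several objective metrics
# (OSATS rating, reduction accuracy, completion time, construct strength).
# Ranges and weights are invented for illustration.
def normalize(value, worst, best):
    """Map a raw metric onto 0..1, where 1 is the best achievable.

    Works for inverted scales too (e.g., lower time is better) because
    `worst` and `best` define the direction.
    """
    span = best - worst
    return max(0.0, min(1.0, (value - worst) / span))

def composite_score(metrics, weights):
    """Weighted sum of normalized metrics; weights should sum to 1."""
    return sum(weights[name] * value for name, value in metrics.items())

metrics = {
    "osats":    normalize(22, worst=7, best=35),      # global rating scale
    "accuracy": normalize(1.2, worst=5.0, best=0.0),  # mm residual step-off
    "time":     normalize(14, worst=30, best=5),      # minutes to completion
    "strength": normalize(380, worst=100, best=500),  # N to fixation failure
}
weights = {"osats": 0.4, "accuracy": 0.3, "time": 0.1, "strength": 0.2}
print(round(composite_score(metrics, weights), 2))
```

The point of the sketch is the structure, not the numbers: no single metric (e.g., OSATS alone) determines the score, consistent with the Putnam et al. finding that OSATS did not track fixation strength.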
2.3. Proficiency-Based Progression for Surgical Skills Training
Proficiency-based progression (PBP) is a next-generation approach to overcoming the challenge of defining measurable criteria for technical performance proficiency in simulator training. The outcome measures of technical skill performance in PBP are quantitative metrics, which unambiguously quantify the performance of the learner. In this approach, benchmarks are defined by the performance scores of experts who undergo the same course [6, 20]. Training with the simulator continues until students meet the benchmark scores on two consecutive trials, regardless of the number of trials or the time required to reach proficiency. The PBP method provides quantitative evidence that the student is proficient: evidence-based medicine requires evidence-based education.
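The PBP stopping rule described above (train until the benchmark is met on two consecutive trials, however many attempts that takes) can be sketched as a short loop; the scores and benchmark below are hypothetical:

```python
# Minimal sketch of proficiency-based progression (PBP): a trainee keeps
# practicing until the expert-derived benchmark is met on two consecutive
# trials, regardless of trial count. Scores are illustrative.
def trials_to_proficiency(scores, benchmark, consecutive_required=2):
    """Return the 1-based trial at which proficiency is reached, or None."""
    streak = 0
    for trial, score in enumerate(scores, start=1):
        streak = streak + 1 if score >= benchmark else 0
        if streak == consecutive_required:
            return trial
    return None  # proficiency not yet demonstrated; training continues

# Benchmark derived from experts taking the same course (e.g., median = 80).
print(trials_to_proficiency([62, 71, 83, 78, 84, 86], benchmark=80))  # -> 6
```

Note that a single benchmark-passing trial (trial 3 above) does not end training; the consecutive-trial requirement is what distinguishes PBP from a one-off pass/fail test.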
In order to build a standard nationwide curriculum for various specialties, benchmarks for national-level courses can be developed using multi-institutional experts. Several such courses have been accepted by the corresponding certifying boards and are recognized as national “standards” because they meet the boards’ requirements. Examples include advanced trauma life support (ATLS), fundamentals of laparoscopic surgery (FLS), and fundamentals of endoscopic surgery (FES). Orthopaedic surgery does not yet have a nationwide specialty course that the American Board of Orthopaedic Surgery (ABOS) recognizes as standard.
2.4. Addressing Problem Solving and Surgical Adversity
Most of the effort in teaching learners about surgery is spent on the knowledge base needed to arrive at an accurate diagnosis, applying the guidelines from the literature regarding the indications for a surgical procedure, and the experiential learning of the mechanics of conducting the procedure on a human. However, problem-solving and decision-making are not explicitly taught in orthopaedic surgery education. Learning about problem-solving in surgery is often a matter of having to deal with a problem as it arises during a procedure or in the perioperative timeframe.
Although much of what is required for a successful surgical result comprises the knowledge and skills to perform a routine procedure, the ability to deal with adversity is another skill that can be taught during residency. Theoretically, if trainees can be taught to anticipate and handle certain potential problems, solving such problems in real-life situations can become a learned behavior. In addition to using technical skills in such challenging situations, nontechnical skills such as communication, situational awareness, ability to handle high stress, and decision-making can be equally important. Simulators that are equipped with and programmed for certain intraoperative challenges can be beneficial in teaching problem-solving and decision-making skills. Currently, there is a need for high-fidelity models that can add intraoperative challenges to the routine performance of a procedure, such as excessive bleeding, dural tears, and iatrogenic fractures.
2.5. Remediation of Poor Performance
Another important aspect of simulation-based training is the remediation of poor surgical skills. Trainees who perform poorly in the laboratory setting should be entered into a remediation program that includes more time and supervision in the skills lab and personal input from supervising faculty. The orthopaedic literature lacks studies that outline the remediation of struggling trainees. In a report summarizing the Council of Emergency Medicine Residency Directors’ (CORD) recommendations for resident remediation, Katz et al. indicated that “Early identification of poor performance is key to successful remediation.” In the context of simulation-based orthopaedic skills training, early detection of poor performance can only be achieved by using validated simulators and objective, reliable, and sensitive measurement techniques. Gas et al. tested general surgery residents’ knowledge and surgical skills in an objective structured clinical examination-style assessment. Participants who scored poorly on any given task were required to remediate those tasks by practicing the simulated stations with staff surgeons and by using online instructional videos. Residents who required remediation were either reevaluated in person by an instructor or submitted self-made videos of their performance, which were then graded by an instructor.
2.6. Choosing the Right Simulator
There are three important criteria that educators should consider when choosing a simulator for surgical skills training: fidelity, cost, and feasibility.
Fidelity is defined as “the degree of association, resemblance, or correspondence with the original; exactness”. In other words, fidelity does not describe the “prettiness” or “genuineness” of the images or the model. In simulation for technical skills, which are the psychomotor skills and tasks that a learner must acquire, fidelity is about the motions that the learner must perform. The hand motions in the simulation should match the complexity of the motions in the corresponding real procedure. Although fidelity is an important criterion in choosing a simulator in health care, there is not enough literature to prove that high-fidelity simulators are better in terms of achieving skill proficiency or transferring learned skills to the OR [27–30]. In a prospective randomized crossover study, Tan et al. investigated whether there were any differences in the learning outcomes of participants who had trained to proficiency on low- or high-fidelity laparoscopic surgical simulators. The participants were randomized to either a high-fidelity (LapSim, Surgical Science) or a low-fidelity (FLS, SAGES) laparoscopic simulator and trained to proficiency in a defined set of tasks. They then crossed over to the other simulator and were tested. The results demonstrated similar increases in participant scores from baseline for both the high- and low-fidelity groups. However, FLS-trained participants demonstrated a greater ability to translate their skills to successfully complete LapSim tasks. The authors concluded that “… the FLS simulator is superior to the LapSim when teaching basic laparoscopic skills to participants.” In a review article, Norman et al. compared learning from high-fidelity simulation with learning from low-fidelity simulation based on measures of clinical performance.
Although both high- and low-fidelity simulation learning resulted in consistent improvements in performance in comparison with no-simulation control groups, nearly all the studies showed no significant advantage of high-fidelity over low-fidelity simulation, with average differences ranging from 1 to 2%.
Complex high-fidelity simulators may cost up to 80,000–90,000 United States Dollars (USD). With further expenses such as maintenance and software upgrades, simulator-based training can become very costly. Although the ultimate goal of simulation-based training is to enhance learning, if the cost associated with implementing effective simulators is prohibitively high, it may not be a viable option. Despite the consistent focus on the high costs of simulation, the literature does not include sufficient research on simulators’ cost-effectiveness. Berg et al. demonstrated that the startup costs and operational expenses of a surgical skills laboratory decreased from 1,151 to 982 USD per resident over a three-year period. It is sensible to assume that the initial expenses of building a simulation-based surgical skills laboratory will be considerably higher than the expenses of an established laboratory. Moreover, surgical skills training before residents enter the OR may bring additional savings through increased OR turnover in teaching hospitals.
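The falling per-resident cost reported by Berg et al. is consistent with simple amortization of one-time startup costs over a growing number of trained residents. A back-of-envelope sketch with invented numbers (not the article’s figures):

```python
# Back-of-envelope amortization of skills-lab costs. The startup cost is
# paid once; operating costs recur annually. All figures are hypothetical
# and chosen only to show why per-resident cost falls as a lab matures.
def cost_per_resident(startup_usd, annual_operating_usd,
                      residents_per_year, years):
    """Average cost per resident trained over the lab's first `years`."""
    total = startup_usd + annual_operating_usd * years
    return total / (residents_per_year * years)

year1 = cost_per_resident(20_000, 8_000, residents_per_year=25, years=1)
year3 = cost_per_resident(20_000, 8_000, residents_per_year=25, years=3)
print(round(year1), round(year3))  # per-resident cost falls as startup amortizes
```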
Choosing the best simulator also depends upon the procedure or specific task to be trained. Simulating full procedures is usually more difficult because the surrogates used for full-procedure simulations, such as cadavers and live animal models, are not very feasible options due to procurement, cost, and logistics. Partial manikins with replaceable organs or tissues can be used repeatedly and are more feasible and cost-effective for training specific simple tasks such as central line placement. Virtual reality (VR) simulators, such as laparoscopic or arthroscopic simulators, have been developed with the addition of haptic properties and replica legs for increased fidelity. These VR simulators are very feasible owing to their easy accessibility for trainees, the small space they occupy, the possibility of repeated training, and their resemblance to real laparoscopic or arthroscopic procedures. However, cost is still considered a drawback, especially for high-fidelity haptic arthroscopy simulators. When considering the feasibility of implementing simulator-based training in a surgical educational curriculum, the time spent in education, and thus away from clinical practice, should also be taken into account. Although more simulation-based surgical skills training may result in better learning for residents, conducting lab-based training for extended periods, especially early in a residency, might not be feasible. As a generalization, the measure of value for a simulator should be as follows: the fidelity of a simulator should match the complexity of the task or procedure to be trained and/or assessed. Simple models should be used for basic skills training and more sophisticated computer-based simulators for complex procedures.
3. Skills Assessment for Board Certification: What Are the Issues and Areas That Need to Be Improved?
Certifying boards’ mission is to set standards for education and practice. Although many competencies are required to meet these standards, for surgical boards, technical skill in performing procedures is a particularly important competency. Since assessing technical skills in the OR is challenging, simulation should play an important role in assessing skills during the training and certification process. Surgical boards have the opportunity to advance simulation by requiring its use during the steps toward initial and ongoing certification. Some surgical boards have taken initiatives to incorporate simulation into their certification process, but others have encountered barriers that have prevented the wide adoption of board-required simulation-based surgical skills training. In addition to the conservative attitude of opposing any new technology before there is sufficient evidence and validation to prove its efficacy, value, and utility, there is a natural resistance to change and fear among those who risk adverse consequences from it. From a practical standpoint, there is the difficulty, discussed below, of demonstrating transfer from simulation to the OR in order to provide the unequivocal evidence necessary to convince the boards of the value of simulation. Although the ABOS has required simulation-based training as part of residency in postgraduate year one since 2013, the standards of such training and of residents’ performance have not been well defined.
One of the challenges for boards implementing simulation-based skills training is the fundamental need for both the simulation course and the associated assessments to be rigorously validated prior to implementation. Unfortunately, there is not sufficient high-level evidence in orthopaedic surgery to prove that simulated skills training courses improve performance in the OR or that the associated assessments are reliable and valid. Another important point is that the surgical outcome is not completely determined by the success of the surgical procedure; there are many confounding factors, such as whether the surgeon has chosen the right treatment (or procedure) for the patient, the preoperative health of the patient, appropriate preparation before entering the OR, postoperative nonsurgical complications, and patient compliance with postoperative instructions. The closest available evidence may be a learner who, after training to proficiency on a simulator, demonstrates proficiency in an animal model that is nearly identical to the corresponding patient procedure. The ABOS has recently sponsored grants through the Orthopaedic Research and Education Foundation to assist investigators in obtaining high-level evidence on certain simulation-based surgical skills training courses. These grants hold promise for the future of simulation in orthopaedic surgery.
Unfortunately, challenges in simulation-based skills training in orthopaedic surgery are not limited to a lack of sufficient evidence. Even when high-level scientific evidence proves the value of simulation in orthopaedic surgery, implementing simulation-based skills training courses across residency programs can be very expensive. Furthermore, a standard simulation-based skills training course and the associated assessments the board requires must be available locally or at regional testing centers—the logistics of building such a network can be extremely challenging.
4. Conclusion
Simulation-based skills training is becoming increasingly integrated into surgical education as an important teaching method across surgical specialties. Several challenges remain to be addressed before simulation can be further implemented in residency curricula and board certification. These challenges can be summarized as validating simulation-based courses; proving that simulation-based surgical training improves trainees’ performance in the OR; addressing the issues of expense, fidelity, and feasibility; standardizing assessment techniques; and involving the boards in implementing simulation as a requirement for board eligibility or certification. Advancement of simulation-based skills training will accelerate once the challenges outlined in this article are recognized and systematically addressed.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
- K. Atesok, J. D. Mabrey, L. M. Jazrawi, and K. A. Egol, “Surgical simulation in orthopaedic skills training,” Journal of the American Academy of Orthopaedic Surgeons, vol. 20, no. 7, pp. 410–422, 2012.
- M. D. Karam, B. Westerlind, D. D. Anderson, J. L. Marsh, and UI Orthopaedic Surgical Skills Training Committee Corresponding, “Development of an orthopaedic surgical skills curriculum for post-graduate year one resident learners—the University of Iowa experience,” Iowa Orthopedic Journal, vol. 33, pp. 178–184, 2013.
- A. G. Gallagher, N. E. Seymour, J.-A. Jordan-Black, B. P. Bunting, K. McGlade, and R. M. Satava, “Prospective, randomized assessment of transfer of training (ToT) and transfer effectiveness ratio (TER) of virtual reality simulation training for laparoscopic skill acquisition,” Annals of Surgery, vol. 257, no. 6, pp. 1025–1031, 2013.
- N. N. Khamis, R. M. Satava, S. A. Alnassar, and D. E. Kern, “A stepwise model for simulation-based curriculum development for clinical skills, a modification of the six-step approach,” Surgical Endoscopy, vol. 30, no. 1, pp. 279–287, 2016.
- G. W. Thomas, B. D. Johns, J. L. Marsh, and D. D. Anderson, “A review of the role of simulation in developing and assessing orthopaedic surgical skills,” Iowa Orthopedic Journal, vol. 34, pp. 181–189, 2014.
- K. Atesok, P. MacDonald, J. Leiter et al., “Orthopaedic education in the era of surgical simulation: still at the crawling stage,” World Journal of Orthopedics, vol. 8, no. 4, pp. 290–294, 2017.
- B. A. Butler, C. D. Lawton, J. Burgess, E. S. Balderama, K. A. Barsness, and J. F. Sarwark, “Simulation-based educational module improves intern and medical student performance of closed reduction and percutaneous pinning of pediatric supracondylar humeral fractures,” The Journal of Bone and Joint Surgery, vol. 99, no. 23, p. e128, 2017.
- H. Maertens, R. Aggarwal, N. Moreels, F. Vermassen, and I. Van Herzeele, “A proficiency based stepwise endovascular curricular training (PROSPECT) program enhances operative performance in real life: a randomised controlled trial,” European Journal of Vascular and Endovascular Surgery, vol. 54, no. 3, pp. 387–396, 2017.
- A. S. S. Thomsen, D. Bach-Holm, H. Kjærbo et al., “Operating room performance improves after proficiency-based virtual reality cataract surgery training,” Ophthalmology, vol. 124, no. 4, pp. 524–531, 2017.
- Y. Kurashima, L. S. Feldman, P. A. Kaneva et al., “Simulation-based training improves the operative performance of totally extraperitoneal (TEP) laparoscopic inguinal hernia repair: a prospective randomized controlled trial,” Surgical Endoscopy, vol. 28, no. 3, pp. 783–788, 2014.
- V. Arikatla, S. Horvath, Y. Fu et al., “Development and face validation of a virtual camera navigation task trainer,” Surgical Endoscopy, vol. 33, no. 6, Article ID 30324462, 2018.
- A. Alsalamah, R. Campo, V. Tanos et al., “Face and content validity of the virtual reality simulator “ScanTrainer®”,” Gynecological Surgery, vol. 14, no. 1, p. 18, 2017.
- G. Lopez, D. F. Martin, R. Wright et al., “Construct validity for a cost-effective arthroscopic surgery simulator for resident education,” Journal of the American Academy of Orthopaedic Surgeons, vol. 24, no. 12, pp. 886–894, 2016.
- D. Fürst, M. Hollensteiner, S. Gabauer et al., “Transpedicular approach on a novel spine simulator: a validation study,” Journal of Surgical Education, vol. 75, no. 4, pp. 1127–1134, 2018.
- N. R. Howells, H. S. Gill, A. J. Carr, A. J. Price, and J. L. Rees, “Transferring simulated arthroscopic skills to the operating theatre,” The Journal of Bone and Joint Surgery. British Volume, vol. 90-B, no. 4, pp. 494–499, 2008.
- K. Atesok, R. M. Satava, J. L. Marsh, and S. R. Hurwitz, “Measuring surgical skills in simulation-based training,” Journal of the American Academy of Orthopaedic Surgeons, vol. 25, no. 10, pp. 665–672, 2017.
- P. Szasz, M. Louridas, K. A. Harris, R. Aggarwal, and T. P. Grantcharov, “Assessing technical competence in surgical trainees,” Annals of Surgery, vol. 261, no. 6, pp. 1046–1055, 2015.
- D. D. Anderson, S. Long, G. W. Thomas, M. D. Putnam, J. E. Bechtold, and M. D. Karam, “Objective structured assessments of technical skills (OSATS) does not assess the quality of the surgical result effectively,” Clinical Orthopaedics and Related Research®, vol. 474, no. 4, pp. 874–881, 2016.
- M. D. Putnam, E. Kinnucan, J. E. Adams, A. E. Van Heest, D. J. Nuckley, and J. Shanedling, “On orthopedic surgical skill prediction-the limited value of traditional testing,” Journal of Surgical Education, vol. 72, no. 3, pp. 458–470, 2015.
- A. G. Gallagher, E. M. Ritter, H. Champion et al., “Virtual reality simulation for the operating room,” Annals of Surgery, vol. 241, no. 2, pp. 364–372, 2005.
- M. D. Timberlake, D. Stefanidis, and A. K. Gardner, “Examining the impact of surgical coaching on trainee physiologic response and basic skill acquisition,” Surgical Endoscopy, vol. 32, no. 10, pp. 4183–4190, 2018.
- C. Braddock III, P. L. Hudak, J. J. Feldman, S. Bereknyei, R. M. Frankel, and W. Levinson, ““Surgery is certainly one good option”: quality and time-efficiency of informed decision-making in surgery,” The Journal of Bone and Joint Surgery-American Volume, vol. 90, no. 9, pp. 1830–1838, 2008.
- O. Brunckhorst, M. S. Khan, P. Dasgupta, and K. Ahmed, “Nontechnical skill training and the use of scenarios in modern surgical education,” Current Opinion in Urology, vol. 27, no. 4, pp. 330–336, 2017.
- E. D. Katz, R. Dahms, A. T. Sadosty, S. A. Stahmer, and D. Goyal, “Guiding principles for resident remediation: recommendations of the CORD remediation task force,” Academic Emergency Medicine, vol. 17, no. 2, pp. S95–S103, 2010.
- B. L. Gas, E. H. Buckarma, M. Mohan, T. K. Pandian, and D. R. Farley, “Objective assessment of general surgery residents followed by remediation,” Journal of Surgical Education, vol. 73, no. 6, pp. e71–e76, 2016.
- Oxford University Press, The Oxford English Dictionary, Oxford University Press, Oxford, UK, 2018, http://www.oed.com.
- S. C. Tan, N. Marlow, J. Field et al., “A randomized crossover trial examining low- versus high-fidelity simulation in basic laparoscopic skills training,” Surgical Endoscopy, vol. 26, no. 11, pp. 3207–3214, 2012.
- G. Norman, K. Dore, and L. Grierson, “The minimal relationship between simulation fidelity and transfer of learning,” Medical Education, vol. 46, no. 7, pp. 636–647, 2012.
- J. Kim, J. H. Park, and S. Shin, “Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis,” BMC Medical Education, vol. 16, Article ID 27215280, p. 152, 2016.
- D. Hart, J. Nelson, J. Moore, E. Gross, A. Oni, and J. Miner, “Shoulder dystocia delivery by emergency medicine residents: a high-fidelity versus a novel low-fidelity simulation model-A pilot study,” AEM Education and Training, vol. 1, no. 4, pp. 357–362, 2017.
- W. Isaranuwatchai, R. Brydges, H. Carnahan, D. Backstein, and A. Dubrowski, “Comparing the cost-effectiveness of simulation modalities: a case study of peripheral intravenous catheterization training,” Advances in Health Sciences Education, vol. 19, no. 2, pp. 219–232, 2014.
- D. A. Berg, R. E. Milner, C. A. Fisher, A. J. Goldberg, D. T. Dempsey, and H. Grewal, “A cost-effective approach to establishing a surgical skills laboratory,” Surgery, vol. 142, no. 5, pp. 712–721, 2007.
- A. A. Alsaad, V. Y. Bhide, J. L. Moss Jr., S. M. Silvers, M. M. Johnson, and M. J. Maniaci, “Central line proficiency test outcomes after simulation training versus traditional training to competence,” Annals of the American Thoracic Society, vol. 14, no. 4, pp. 550–554, 2017.
Copyright © 2019 Kivanc Atesok et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.