The Scientific World Journal
Volume 2014 (2014), Article ID 507076, 10 pages
http://dx.doi.org/10.1155/2014/507076
Research Article

Validation of a Novel Virtual Reality Simulator for Robotic Surgery

1Division of Women and Baby, Department of Reproductive Medicine and Gynaecology, University Medical Centre Utrecht, P.O. Box 85500, Room F05-126, 3508 GA Utrecht, The Netherlands
2Department of Obstetrics & Gynaecology, Skåne University Hospital, Tornavagen 10, 221 85 Lund, Sweden
3Department of Surgery, Skåne University Hospital, Tornavagen 10, 221 85 Lund, Sweden
4Lund Clinical Skills Center, Skåne University Hospital, Barngatan 2 B, 221 85 Lund, Sweden
5Department of Surgery, Academic Medical Centre, P.O. Box 22660, 1100 DD Amsterdam, The Netherlands

Received 11 August 2013; Accepted 13 November 2013; Published 30 January 2014

Academic Editors: P. Chien and T. Levy

Copyright © 2014 Henk W. R. Schreuder et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Objective. With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for use in training for robot-assisted surgery. Methods. A comparative cohort study was performed. Participants () were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice, and performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Results. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were “time to complete” and “economy of motion” (). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, the visual graphics, the movements of instruments, the interaction with objects, and the depth perception were all rated as realistic. The simulator is considered a very useful training tool for residents and medical specialists starting with robotic surgery. Conclusions. Face and construct validity of the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery.

1. Introduction

Since the FDA approval of the da Vinci Surgical System (dVSS) (Intuitive Surgical, Sunnyvale, CA) for gynaecological surgery, there has been exponential growth in robot-assisted gynaecologic surgical procedures [1]. The field of robot-assisted minimally invasive surgery is still expanding and currently involves many specialties, including urology, general surgery, cardiothoracic surgery, paediatric surgery, and gynaecology. Current and future developments will further increase the uptake of robot-assisted procedures for minimally invasive surgery [2].

With the increase in robotic procedures, there is a concomitant rising demand for training methods for the dVSS. For laparoscopic surgery, the advantages of an ex vivo training program are well established. Laparoscopic skills can be learned using inanimate box/video trainers [3] and/or virtual reality (VR) trainers [4]. The acquired skills can be transferred to real operations, leading to shorter operating times and fewer errors [5–7]. For robotic surgery, the development of training programs has just started and, as in laparoscopy, calls for competence-based training programs. Especially in technologically highly advanced surgical methods, such as robot-assisted surgery, surgeons should be properly trained before embarking on surgical procedures in patients. For robotic surgery, guidelines for training and credentialing were described in a consensus statement in 2007 [8]. Current robotic training programs for residents may involve dry labs with inanimate training, observation, bedside assisting, and live surgery training [9, 10]. The main disadvantages of dry lab training are the lack of objective automated assessment and the need for extra, non-cost-effective training time on robotic systems to become familiar with and master the robot system. To overcome these limitations, VR simulation could be the solution for training this new surgical technique before embarking on robotic surgery in patients [11].

In 2010, a VR trainer for robotic surgery, the dV-Trainer (dVT) (Mimic Technologies, Seattle, WA, USA), was introduced. During the development of the system, training on a prototype provided an improvement of robotic skills on the dVSS similar to that obtained by training on the dVSS itself [12]. This suggests that VR training could be used for training robotic skills. Before the dVT can be implemented in a gynaecologic training program for robotic surgery, the first steps of the validation process (face and construct validity) must be established [13]. We designed a prospective study to establish face and construct validity of the dVT among a large group of gynaecologists.

2. Methods

2.1. Participants

During the 2nd European Symposium on Robotic Gynaecologic Surgery, participants volunteering for the study were asked to complete three training modules on the VR simulator. The participants () were categorized into three groups according to their experience with robotic surgery (total number of robotic cases performed). Group 1 (), “novice,” had no experience in robotic surgery; Group 2 (), “intermediate,” had performed more than 5 and fewer than 50 robotic cases; and Group 3 (), “expert,” had performed more than 70 robotic cases. The novice group consisted of students, residents, and medical specialists. The intermediate and expert groups consisted of gynaecologic surgeons with varying robotic experience. Prior laparoscopic experience was stated as the number of level I-II laparoscopic procedures (diagnostic, sterilization, tubectomy, salpingectomy, or cystectomy) and the number of level III-IV laparoscopic procedures ((radical) hysterectomy, lymphadenectomy, or sacrocolpopexy), according to the guidelines of the European Society of Gynaecologic Endoscopy (ESGE) [14].

2.2. Equipment

The dVT is a VR simulator especially designed for training robotic surgery with the dVSS. The simulator consists of a two-handed haptic system with grips that emulate the master grips on the surgeon’s console. Together with pedals and a high-definition stereoscopic display, it simulates the console of the dVSS (Figure 1). The haptic component of the dVT includes a 580 MHz microprocessor and a 100 Mb Ethernet interface for data transfer and networking. The haptic device is networked with a computer that runs the dVT simulation software. The simulation system contains an automated system to measure different parameters of performance. The comprehensive training program of the dVT is subdivided into two sections. The “overview and basic skills training” consists of four modules: surgeon’s console overview, EndoWrist manipulation, camera and clutching, and troubleshooting. The “surgical skills training” includes the following four modules: needle control, needle driving, energy and dissection, and games. For this study we used the basic skill exercise “Camera Targeting,” the EndoWrist exercise “Peg Board,” and the surgical skill exercise “Thread the Rings” (Figure 2). All exercises have three levels of difficulty; for the purpose of this study we used the intermediate level (level 2) for all exercises.

Figure 1: The dV-Trainer (showing console, grips, and pedals (image provided by Mimic Technologies, Inc., Seattle, WA)).
Figure 2: Exercises (exercises used in this study: “Camera Targeting,” “Thread the Rings,” and “Peg Board” (image provided by Mimic Technologies, Inc., Seattle, WA)).
2.3. Face Validity

Face validity is defined as the extent to which the simulator resembles the situation in the real world [13]. To investigate this, all participants filled out a questionnaire immediately after performing all three exercises. The first section of the questionnaire contained several questions about demographics, previous experience with VR trainers, laparoscopy, and robotic surgery. The second section contained 28 questions regarding the simulator, the exercises, and the training capacity of the simulator. These questions were used for establishing face validity and were presented on a 5-point Likert scale [15]. Finally, three general statements concerning training robotic surgery were made. These statements could be answered with “yes,” “no,” or “no opinion”.
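As an aside, the mean ± SD summaries reported for such 5-point Likert items (the format used in Tables 3 and 4) take only a few lines of code to compute. The sketch below uses invented ratings for hypothetical questionnaire items, not the study's data:

```python
# Summarize 5-point Likert responses as mean ± SD, the format used in
# Tables 3 and 4. All ratings below are invented for illustration.
from statistics import mean, stdev

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
ratings = {
    "realism overall":   [5, 4, 5, 4, 4, 5, 4],
    "depth perception":  [3, 4, 3, 2, 4, 3, 3],
    "training capacity": [5, 5, 4, 5, 5, 4, 5],
}

for item, scores in ratings.items():
    # stdev() is the sample standard deviation (n - 1 denominator)
    print(f"{item}: {mean(scores):.1f} ± {stdev(scores):.1f} SD")
```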

2.4. Construct Validity

Construct validity is defined as the ability of the simulator to distinguish the experienced from the inexperienced surgeon [13]. To investigate construct validity, all participants were asked to perform each of the three exercises twice. Before starting on the simulator, the exercises were briefly explained by the test supervisor and introduced with an instruction video for each exercise. The first run of each exercise was used for familiarization with the simulator only, and verbal instructions were given whenever necessary. The second run on each exercise was used for validation purposes and data analysis. The first exercise was “Camera Targeting” (level 2), in which the goal of the exercise is to “accurately position the camera while manipulating objects in a large workspace.” The second exercise was “Thread the Rings” (level 2), in which the goal is to “develop accuracy when driving a needle and practice two-handed needle manipulation with hand offs between instruments.” The third and last exercise was “Peg Board” (level 2) in which the goal is to “improve EndoWrist dexterity and coordinated two-handed motion. Practice handling of objects between instruments and learn to avoid unwanted collisions of instruments with the surrounding environment and to develop camera control skills in the context of a pegboard task.” For all three exercises, the outcome parameters were measured during the second run of the participant. The outcome parameters and their definition are shown in Table 1.

Table 1: Parameter definition.

2.5. Statistical Analysis

A sample size of on average 14 subjects per group allows for the detection of between-group differences of 0.85 standard deviations (i.e., Cohen’s d) with 80% power, assuming a correlation between scores on the same individual of 0.5. The collected data were analyzed using the statistical software package SPSS 15.0 (SPSS Inc., Chicago, IL). Differences between the group performances were analyzed using the Kruskal-Wallis test. If there appeared to be a significant difference, a comparison between two separate groups was conducted using the Mann-Whitney U test for the analysis of nonparametric data. A level of was considered to be statistically significant.
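For illustration, the analysis pipeline described above (a Kruskal-Wallis test across the three groups, followed by pairwise Mann-Whitney U comparisons) can be sketched with a minimal stdlib-only implementation of the two test statistics. The group data below are invented for illustration and are not the study's measurements; in practice one would use a statistics package (such as SPSS, as the authors did, or scipy) to obtain the P values as well:

```python
# Illustrative reimplementation of the paper's analysis pipeline:
# Kruskal-Wallis across three groups, then pairwise Mann-Whitney U.
from itertools import combinations

def average_ranks(pooled):
    """Rank all values (1-based), giving tied values their average rank."""
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg = (i + j + 2) / 2          # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis_h(groups):
    """H statistic (no tie correction) for k independent samples."""
    pooled = [v for g in groups for v in g]
    r = average_ranks(pooled)
    n_total = len(pooled)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(r[idx:idx + len(g)])
        h += rank_sum ** 2 / len(g)
        idx += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

def mann_whitney_u(a, b):
    """Smaller of the two U statistics for samples a and b."""
    r = average_ranks(list(a) + list(b))
    u1 = sum(r[:len(a)]) - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

# Hypothetical "time to complete" (seconds): experts fastest, novices slowest.
novice       = [310, 295, 340, 280, 325]
intermediate = [250, 230, 265, 240, 255]
expert       = [180, 200, 175, 190, 185]
groups = {"novice": novice, "intermediate": intermediate, "expert": expert}

print(f"Kruskal-Wallis H = {kruskal_wallis_h(list(groups.values())):.2f}")
for (na, a), (nb, b) in combinations(groups.items(), 2):
    print(f"U({na} vs {nb}) = {mann_whitney_u(a, b)}")
```

The two-stage design (omnibus test first, pairwise tests only on a significant result) limits the number of comparisons made and hence the multiple-testing burden.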

3. Results

3.1. Demographics

Forty-two subjects participated in this study. None of the participants had prior substantial experience with this simulator. Three participants, one in each group, had seen the simulator once before. There was no significant difference between the groups in prior experience with other VR simulators (). The prior laparoscopic experience of the intermediate and the expert groups was not significantly different for level I-II procedures () and level III-IV procedures (). In the expert group, a wide range of total robotic cases was performed (70–1200). Demographics are shown in Table 2.

Table 2: Demographics.

3.2. Face Validity

All participants completed the questionnaire. The mean scores regarding the simulator and the exercises are shown in Table 3. The realism of the simulator in general, the visual graphics, the movements of instruments, the interaction with objects, and the depth perception were all rated as realistic. The lowest score for the simulator in general was given for depth perception, and for this item there was a significant difference between the novice group and the intermediate group (). For all exercises, the participants stated that the goal of the exercise was clearly reached. The training capacity of all separate exercises was rated as “very good” by all groups. Compared with the novice group, the expert group rated the training capacity of “Thread the Rings” significantly higher (). The exercises were rated as “moderately difficult”; the “Peg Board” was considered the easiest exercise. The novice group and the intermediate group rated “Camera Targeting” significantly more difficult than the expert group ( and , resp.). There were no other significant differences between the three groups.

Table 3: Face validity (simulator and exercises).

The mean scores regarding training capacity in general are shown in Table 4. The training capacity of the simulator in general was rated “very good” (4.7 ± 0.5 SD). The training capacity for eye-hand coordination (4.5 ± 0.7 SD), camera navigation (4.5 ± 0.9 SD), instrument navigation (4.4 ± 0.8 SD), and use of pedals and clutch (4.6 ± 0.8 SD) was appreciated by the participants. The simulator was rated as a “very useful” training tool for junior residents, senior residents, and fellows or medical specialists starting with robotic surgery. The simulator was rated “moderately useful” for training robotic experts (3.3 ± 1.4 SD). The participants considered the simulator less useful for warmup before surgery or for the retention of skills.

Table 4: Face validity (training capacity).

At the end of the questionnaire, three general statements about training robotic surgery were given. Almost everyone agreed with the statement that surgeons starting with robotics should first train on a virtual system (, , and ). Most of the participants (86%) thought it was time for a competence- or proficiency-based training curriculum for robotic surgery (, , and ), and the majority (74%) agreed that such a curriculum should be mandatory (, , and ).

3.3. Construct Validity

All participants completed the three exercises on the dVT. For all parameters, the results and significant differences of the second run are shown in Table 5. Two important parameters showed a significant difference between all three groups: “economy of motion” in exercise 1 and “time to complete” in exercise 3. Comparison between the novice group and the expert group demonstrated the most significant differences. None of the other outcome parameters demonstrated a difference between novices and/or intermediates compared with their more experienced colleagues. The performance variability of the most relevant parameters of the first two exercises is shown in box plots. The exercise “Camera Targeting” was the most discriminative and showed significant differences in five parameters, with less variability in the expert group (Figure 3). Four parameters showed a significant difference in the exercise “Thread the Rings” (Figure 4). In the “Peg Board” exercise, a significant difference was found in three parameters.
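The box plots referred to above summarize each group by its median and interquartile range. A minimal sketch of how such summaries are computed, using invented "economy of motion" scores rather than the study's data:

```python
# Per-group median and interquartile range, the summaries displayed by the
# box plots in Figures 3 and 4. All scores below are invented.
from statistics import median, quantiles

# Hypothetical "economy of motion" scores (metres of instrument travel;
# lower is better, so experts score lowest)
groups = {
    "novice":       [9.1, 8.4, 10.2, 7.9, 9.6],
    "intermediate": [6.3, 5.8, 7.1, 6.6, 6.0],
    "expert":       [4.2, 4.8, 4.0, 4.5, 4.3],
}

for name, data in groups.items():
    q1, _, q3 = quantiles(data, n=4)   # default "exclusive" quartile method
    print(f"{name}: median {median(data):.1f}, IQR {q1:.1f}-{q3:.1f}")
```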

Table 5: Construct validity.
Figure 3: Exercise “Camera Targeting” (box plot of the four most important parameters in the exercise (bars are medians, boxes show the interquartile range, whiskers show the range, are outliers, are extreme outliers, and large horizontal bars indicate statistically significant differences, specified with P values)).
Figure 4: Exercise “Thread the Rings” (box plot of the four most important parameters in the exercise (bars are medians, boxes show the interquartile range, whiskers show the range, are outliers, are extreme outliers, and large horizontal bars indicate statistically significant differences, specified with P values)).

4. Discussion

In this study, the simulator showed good face validity. The dVT received high ratings on the realism of the simulator itself and the separate exercises in all three groups. The training capacity of the simulator was rated “very good” for residents and gynaecologists starting with robotic surgery, but the simulator was found less useful for training experts. Perhaps the development of complex procedural tasks could add value for training experts who want to start performing new procedures. Using the dVT as a warmup before surgery (to become familiar with the instruments again) or for the retention of skills was not considered a real benefit of the simulator. Regarding the realism of the simulator, a remark should be made about depth perception. We noticed that participants wearing multifocal glasses had a slight problem with depth perception in the dVT. An explanation could be the difference between the viewing distance in the dVT and the length of this path in the dVSS. When participants changed their distance to the simulator/binoculars or did not wear their glasses during their performance, the problem with depth perception largely resolved. Unfortunately, our questionnaire did not ask participants whether they wore glasses, so we could not correlate this observation with the results.

The simulator was able to differentiate between novices and experts for a number of parameters in each exercise (construct validity). “Time to complete” the exercise and “economy of motion” were the two most discriminating parameters. For most parameters there was a significant difference between novices and experts, the exceptions being the “number of drops” and the distance of the “instruments out of view.” A possible cause for the nonsignificance of the “number of drops” may be the relatively easy level of difficulty, which limited the number of drops in all exercises. Regarding the nonsignificance of “instrument(s) out of view,” all three groups had short periods of time in which instruments were out of view. However, the experts did not seem to “lose” their instruments, as intermediates and in particular novices did. For these less experienced participants, this might explain the increased time to complete the exercises compared with their more experienced colleagues. There was less difference between the expert and the intermediate group. An explanation could be that even the level two exercises are still too easy to show a difference between these groups. This is supported by the fact that the most difficult exercise (Camera Targeting) did show a significant difference for “economy of motion” and “time to complete” between these two groups.

This is the first study to investigate the validity of the dVT in gynaecology. Several previous small studies in urology were performed during the beta development phase of the simulator, using relatively easy exercises [16–18]. Furthermore, the number of participants () was never as large as in this study, and a comparison between three groups (novice, intermediate, and expert) was never conducted, since other studies only compared two groups (novice versus expert). The acceptability of the dVT was first addressed by Lendvay et al. In their survey, during a postgraduate training course in paediatric robotic surgery, the majority of participants believed that the dVT could teach robotic skills comparable to a dry lab robotic skills station [19]. A study of 19 novices and seven experts validated a prototype of the dVT, demonstrating face and content validity of the simulator, but did not show construct validity [18]. In a similar study consisting of a total of 15 participants with varying degrees of urological experience, acceptability and preliminary face and content validity were demonstrated, and a positive correlation between robotic experience and key performance metrics was found. The authors concluded that more research was needed and suggested that a prospective study, similar in design to this one, could help determine the utility of integrating this simulator into a robotic training curriculum [17]. Kenney et al. showed face, content, and construct validity for the dVT as a VR simulator for the dVSS; nineteen novices and seven experts completed two EndoWrist modules and two needle driving modules [16]. In our study, we found that the dVT was also able to distinguish between three groups of participants with different levels of robotic experience. The dVT seems to provide almost equal training capacity compared with the real dVSS [12].
Moreover, training on the dVT can improve performance on the robot system as much as training with the robot itself, and improvement of technical surgical performance can be achieved within a relatively short period of time [20, 21]. Another important question is whether this VR system could also be used for the assessment of robotic skills [22]. Recently, Perrenot et al. concluded that the dVT is a valid tool to assess basic skills of robotic surgery on the dVSS [23].

Other groups are working on the development of different VR simulators for robotic surgery and have reported on their prototypes [24, 25]. There is one laparoscopic simulator that can be converted into a robotic simulator and can train basic robotic skills [26]; however, van der Meijden et al. were not able to establish construct validity for this simulator, and improvement is necessary before it can be used in robotic training programs [27]. Recently, face and content validity were established for another VR simulator for robotic surgery, the robotic surgical simulator (ROSS) [28, 29].

With the introduction of VR simulators for robotic surgery, a new tool for robotic training and credentialing has become available. Until now, most training programs for robotic surgery have consisted of didactics, hands-on dry lab training, instructional videos, assistance at the operating table, and performance of segments of an operation [30]. From there, some authors recommend starting with easy procedures to become familiar with the dVSS [31]. Virtual reality simulation could be of great value in robotic training programs, allowing surgeons to develop skills and pass a substantial part of their learning curve before operating on humans [32]. VR simulators provide a controlled and pressure-free environment with real-time objective measurements of the trainee’s performance, thereby offering useful feedback for adequate self-assessment. This could eventually improve operating time and patient safety. The recommended way to use a VR simulator as a training tool is to implement it in a competence-based training curriculum [10]. Almost all of the participants in our study thought it was time to develop competence-based training curricula for robotic surgery, instead of the time-based curricula often used now. A vast majority of the participants even thought such training should be mandatory before starting robotic surgery. This is important, since we know from laparoscopy that providing expensive simulators to trainees, without implementing them in an obligatory training curriculum, will not motivate them enough to train voluntarily [33, 34].

Recently, the dVT software exercises became commercially available for use directly on the da Vinci Si console with the release of the new da Vinci Skills Simulator. The hardware is attached to the actual robotic system as a separate box, or “backpack.” The box contains the software and can be used with the new Si robot models as an add-on tool. In this way, virtual training on the actual robotic console is possible. The first validation studies for the da Vinci Skills Simulator demonstrated good face, content, and construct validity [35, 36]. Recently, a prospective randomized study demonstrated the highest forms of validity (concurrent and predictive validity) for the da Vinci Skills Simulator: the authors showed that a simulator-trained group actually performed better in real surgery on the dVSS [37]. In the future, the development of new modules will continue, and complete VR procedures, such as a hysterectomy, will probably become available for use on the dVSS or on stand-alone VR simulators.

5. Conclusions

In conclusion, in this study, face and construct validity of the dVT were established. The simulator was regarded as a useful tool for training robotic surgery on the dVSS. For optimal use, the simulator should be implemented in validated competence-based robotic training curricula. Further studies regarding predictive validity need to show whether simulator-learned skills are transferable to actual operations in the short run and, if so, whether the positive effects on surgical performance persist in the long run.

Conflict of Interests

Jan E. U. Persson and René H. M. Verheijen are proctors for radical robot-assisted surgery, sponsored by Intuitive Surgical. For all other authors there is no conflict of interests. No sources of financial support are to be stated.

Acknowledgments

The authors would like to thank Assistant Professor M. J. C. (René) Eijkemans from the Julius Centre for Health Sciences and Primary Care, University of Utrecht, the Netherlands, for his help with the statistical analysis. They would like to thank Tomas Ragnarsson and Jeff and Andrea Berkley for their help on-site. The authors would also like to thank all the participants for their cooperation in this study.

References

  1. R. W. Holloway, S. D. Patel, and S. Ahmad, “Robotic surgery in gynecology,” Scandinavian Journal of Surgery, vol. 98, no. 2, pp. 96–109, 2009.
  2. H. W. R. Schreuder and R. H. M. Verheijen, “Robotic surgery,” British Journal of Obstetrics and Gynaecology, vol. 116, no. 2, pp. 198–213, 2009.
  3. M. E. Ritter and D. J. Scott, “Design of a proficiency-based skills training curriculum for the Fundamentals of Laparoscopic Surgery,” Surgical Innovation, vol. 14, no. 2, pp. 107–112, 2007.
  4. K. Gurusamy, R. Aggarwal, L. Palanivelu, and B. R. Davidson, “Systematic review of randomized controlled trials on the effectiveness of virtual reality training for laparoscopic surgery,” British Journal of Surgery, vol. 95, no. 9, pp. 1088–1097, 2008.
  5. G. Ahlberg, L. Enochsson, A. G. Gallagher et al., “Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies,” American Journal of Surgery, vol. 193, no. 6, pp. 797–804, 2007.
  6. C. R. Larsen, J. L. Soerensen, T. P. Grantcharov et al., “Effect of virtual reality training on laparoscopic surgery: randomised controlled trial,” British Medical Journal, vol. 338, p. b1802, 2009.
  7. A. S. Thijssen and M. P. Schijven, “Contemporary virtual reality laparoscopy simulators: quicksand or solid grounds for assessing surgical trainees?” American Journal of Surgery, vol. 199, no. 4, pp. 529–541, 2010.
  8. D. M. Herron and M. Marohn, “A consensus document on robotic surgery,” Surgical Endoscopy, vol. 22, no. 2, pp. 313–325, 2008.
  9. M. A. Finan, M. E. Clark, and R. P. Rocconi, “A novel method for training residents in robotic hysterectomy,” Journal of Robotic Surgery, vol. 4, no. 1, pp. 33–39, 2010.
  10. H. W. R. Schreuder, R. Wolswijk, R. P. Zweemer, M. P. Schijven, and R. H. M. Verheijen, “Training and learning robotic surgery, time for a more structured approach: a systematic review,” British Journal of Obstetrics and Gynaecology, vol. 119, no. 2, pp. 137–149, 2012.
  11. J. M. Albani and D. I. Lee, “Virtual reality-assisted robotic surgery simulation,” Journal of Endourology, vol. 21, no. 3, pp. 285–287, 2007.
  12. M. A. Lerner, M. Ayalew, W. J. Peine, and C. P. Sundaram, “Does training on a virtual reality robotic simulator improve performance on the da Vinci surgical system?” Journal of Endourology, vol. 24, no. 3, pp. 467–472, 2010.
  13. E. M. McDougall, “Validation of surgical simulators,” Journal of Endourology, vol. 21, no. 3, pp. 244–247, 2007.
  14. “ESGE-Standard laparoscopy,” 2013, http://www.esge.org/education/guidelines.
  15. M. S. Matell and J. Jacoby, “Is there an optimal number of alternatives for Likert scale items?” Educational and Psychological Measurement, vol. 31, no. 3, pp. 657–667, 1971.
  16. P. A. Kenney, M. F. Wszolek, J. J. Gould, J. A. Libertino, and A. Moinzadeh, “Face, content, and construct validity of dV-trainer, a novel virtual reality simulator for robotic surgery,” Urology, vol. 73, no. 6, pp. 1288–1292, 2009.
  17. T. S. Lendvay, P. Casale, R. Sweet, and C. Peters, “Initial validation of a virtual-reality robotic simulator,” Journal of Robotic Surgery, vol. 2, no. 3, pp. 145–149, 2008.
  18. A. S. Sethi, W. J. Peine, Y. Mohammadi, and C. P. Sundaram, “Validation of a novel virtual reality robotic simulator,” Journal of Endourology, vol. 23, no. 3, pp. 503–508, 2009.
  19. T. S. Lendvay, P. Casale, R. Sweet, and C. Peters, “VR robotic surgery: randomized blinded study of the dV-Trainer robotic simulator,” Studies in Health Technology and Informatics, vol. 132, pp. 242–244, 2008.
  20. S. G. Kang, K. S. Yang, Y. H. Ko et al., “A study on the learning curve of the robotic virtual reality simulator,” Journal of Laparoendoscopic & Advanced Surgical Techniques, vol. 22, pp. 438–462, 2012.
  21. R. Korets, A. C. Mues, J. A. Graversen et al., “Validating the use of the Mimic dV-trainer for robotic surgery skill acquisition among urology residents,” Urology, vol. 78, no. 6, pp. 1326–1330, 2011.
  22. J. Y. Lee, P. Mucksavage, D. C. Kerbl, V. B. Huynh, M. Etafy, and E. M. McDougall, “Validation study of a virtual reality robotic simulator: role as an assessment tool?” Journal of Urology, vol. 187, no. 3, pp. 998–1002, 2012.
  23. C. Perrenot, M. Perez, N. Tran et al., “The virtual reality simulator dV-Trainer is a valid assessment tool for robotic surgical skills,” Surgical Endoscopy, vol. 26, pp. 2587–2593, 2012.
  24. D. Katsavelis, K.-C. Siu, B. Brown-Clerk et al., “Validated robotic laparoscopic surgical training in a virtual-reality environment,” Surgical Endoscopy and Other Interventional Techniques, vol. 23, no. 1, pp. 66–73, 2009.
  25. L. W. Sun, F. van Meer, J. Schmid, Y. Bailly, A. A. Thakre, and C. K. Yeung, “Advanced da Vinci surgical system simulator for surgeon training and operation planning,” International Journal of Medical Robotics and Computer Assisted Surgery, vol. 3, no. 3, pp. 245–251, 2007.
  26. F. H. Halvorsen, O. J. Elle, V. V. Dalinin et al., “Virtual reality simulator training equals mechanical robotic training in improving robot-assisted basic suturing skills,” Surgical Endoscopy and Other Interventional Techniques, vol. 20, no. 10, pp. 1565–1569, 2006.
  27. O. A. van der Meijden, I. A. Broeders, and M. P. Schijven, “The SEP “robot”: a valid virtual reality robotic simulator for the da Vinci Surgical System?” Surgical Technology International, vol. 19, pp. 51–58, 2010.
  28. S. A. Seixas-Mikelus, T. Kesavadas, G. Srimathveeravalli, R. Chandrasekhar, G. E. Wilding, and K. A. Guru, “Face validation of a novel robotic surgical simulator,” Urology, vol. 76, no. 2, pp. 357–360, 2010.
  29. S. A. Seixas-Mikelus, A. P. Stegemann, T. Kesavadas et al., “Content validation of a novel robotic surgical simulator,” BJU International, vol. 107, no. 7, pp. 1130–1135, 2011.
  30. P. S. Lee, A. Bland, F. A. Valea, L. J. Havrilesky, A. Berchuck, and A. A. Secord, “Robotic-assisted laparoscopic gynecologic procedures in a fellowship training program,” Journal of the Society of Laparoendoscopic Surgeons, vol. 13, no. 4, pp. 467–472, 2009.
  31. J. L. Ferguson, T. M. Beste, K. H. Nelson, and J. A. Daucher, “Making the transition from standard gynecologic laparoscopy to robotic laparoscopy,” Journal of the Society of Laparoendoscopic Surgeons, vol. 8, no. 4, pp. 326–328, 2004.
  32. R. Al Bareeq, S. Jayaraman, B. Kiaii, C. Schlachta, J. D. Denstedt, and S. E. Pautler, “The role of surgical simulation and the learning curve in robot-assisted surgery,” Journal of Robotic Surgery, vol. 2, no. 1, pp. 11–15, 2008.
  33. L. Chang, J. Petros, D. T. Hess, C. Rotondi, and T. J. Babineau, “Integrating simulation into a surgical residency program: is voluntary participation effective?” Surgical Endoscopy and Other Interventional Techniques, vol. 21, no. 3, pp. 418–421, 2007.
  34. K. W. van Dongen, W. A. van der Wal, I. H. M. B. Rinkes, M. P. Schijven, and I. A. M. J. Broeders, “Virtual reality training for endoscopic surgery: voluntary or obligatory?” Surgical Endoscopy and Other Interventional Techniques, vol. 22, no. 3, pp. 664–667, 2008.
  35. A. J. Hung, P. Zehnder, M. B. Patil et al., “Face, content and construct validity of a novel robotic surgery simulator,” Journal of Urology, vol. 186, pp. 1019–1024, 2011.
  36. K. T. Finnegan, I. Staff, A. M. Mareaney, and S. J. Shichman, “Da Vinci Skills Simulator construct validation study: correlation of prior robotic experience with overall score and time score simulator performance,” Urology, vol. 80, pp. 330–336, 2012.
  37. A. J. Hung, M. B. Patil, P. Zehnder et al., “Concurrent and predictive validation of a novel robotic surgery simulator: a prospective, randomized study,” Journal of Urology, vol. 187, pp. 630–637, 2012.