
World J Surg (2011) 35:245–252
DOI 10.1007/s00268-010-0861-1

ORIGINAL SCIENTIFIC REPORTS

Effects of Virtual Reality Simulator Training Method and Observational Learning on Surgical Performance
Christopher W. Snyder · Marianne J. Vandromme · Sharon L. Tyra · John R. Porterfield Jr. · Ronald H. Clements · Mary T. Hawn

Published online: 18 November 2010
© Société Internationale de Chirurgie 2010

Abstract
Background Virtual reality (VR) simulators and Web-based instructional videos are valuable supplemental training resources in surgical programs, but it is unclear how to optimally integrate them into minimally invasive surgical training.
Methods Medical students were randomized to proficiency-based training on VR laparoscopy and endoscopy simulators by two different methods: proctored training (automated simulator feedback plus human expert feedback) or independent training (simulator feedback alone). After achieving simulator proficiency, trainees performed a series of laparoscopic and endoscopic tasks in a live porcine model. Prior to their entry into the animal lab, all trainees watched an instructional video of the procedure and were randomly assigned to either observe or not observe the actual procedure before performing it themselves. The joint effects of VR training method and procedure observation on time to successful task completion were evaluated with Cox regression models.

Results Thirty-two students (16 proctored, 16 independent) completed VR training. Cox regression modeling with adjustment for relevant covariates demonstrated no significant difference in the likelihood of successful task completion for independent versus proctored training [Hazard Ratio (HR) 1.28; 95% Confidence Interval (CI) 0.96–1.72; p = 0.09]. Trainees who observed the actual procedure were more likely to be successful than those who watched the instructional video alone (HR 1.47; 95% CI 1.09–1.98; p = 0.01).
Conclusions Proctored VR training is no more effective than independent training with respect to surgical performance. Therefore, time-consuming human expert feedback during VR training may be unnecessary. Instructional videos, while useful, may not be adequate substitutes for actual observation when trainees are learning minimally invasive surgical procedures.

Introduction

Minimally invasive surgery (MIS) is evolving rapidly, with novel developments such as natural orifice translumenal endoscopic surgery (NOTES), single-port laparoscopic surgery, and hybrid techniques [1–3]. As new MIS techniques are sought by patients and established in general surgical practice, training programs will be expected to prepare surgical trainees to perform them. However, resource constraints may require future surgical trainees to acquire and practice basic MIS skills outside of the operating room. Virtual reality (VR) surgical simulation and Web-based multimedia instruction are useful supplemental approaches to meeting the challenges of efficient, effective, and ethical MIS training [4–6]. Proficiency-based VR simulator training, in which trainees are required to achieve standardized performance

C. W. Snyder · M. J. Vandromme
Department of Surgery, University of Alabama at Birmingham, 1922 7th Avenue South, KB 428, Birmingham, AL 35294-0016, USA

S. L. Tyra · J. R. Porterfield Jr. · R. H. Clements · M. T. Hawn (corresponding author)
Section of Gastrointestinal Surgery, Department of Surgery, University of Alabama at Birmingham, Birmingham, AL, USA
e-mail: mhawn@uab.edu

M. T. Hawn
Center for Surgical, Medical Acute Care Research and Transitions, Birmingham Veterans Affairs Medical Center, Birmingham, AL, USA



goals, requires significant time and effort but improves early laparoscopic and endoscopic performance [7–12]. One advantage of VR simulators over simpler inanimate practice models is their ability to provide immediate, objective, automated performance feedback. However, most previous studies of proficiency-based VR training have not taken full advantage of automated simulator feedback, instead utilizing a proctored approach in which human experts provide feedback while trainees practice on the simulator. An earlier study by our group demonstrated that an independent approach to VR training, in which trainees rely solely on automated simulator feedback, requires fewer training hours to achieve simulator proficiency than a proctored approach [13]. However, the predictive validity of VR simulator training with human expert feedback (a proctored approach) versus automated feedback alone (an independent approach) for real surgical performance is unknown.

Observational learning has historically played an important role in surgical training, constituting the first step in the time-honored "see one, do one, teach one" model [14]. Observation, however, is no longer limited to the operating room; many professional organizations now offer free Web-based videos of surgical procedures for training purposes. Such training videos are undoubtedly valuable resources, but it is unclear how they compare with the real-life observation of procedures.

Surgical training programs may want to deliver VR simulator training and surgical training videos to their junior residents in a systematic fashion. These supplemental training modalities are not meant to replace real operative experience or be used in isolation, but rather must be integrated into existing surgical training curricula. Data are needed regarding the relative effectiveness of different methods of VR training and observational learning. Previous studies suggest that observation, simulator practice, and expert feedback can all contribute to the learning process, but these different training modalities may interact with each other in a complex fashion [15–17]. We performed a randomized trial evaluating the joint effects of VR simulator training method and observational learning method on the performance of laparoscopic and endoscopic tasks in an animal model.

Materials and methods

Study participants

Medical students from the University of Alabama at Birmingham (UAB) were recruited for the study via mass e-mail announcements and presentations at student meetings. The study was approved by the UAB Institutional Review Board and Institutional Animal Care and Use Committee. Students were initially randomized to independent or proctored VR simulator training with permuted block randomization to ensure equal group sizes.

Simulator training

A proficiency-based training curriculum was developed on laparoscopic (LapSim, Surgical Science, Göteborg, Sweden) and endoscopic (Accutouch Lower GI, Immersion, Gaithersburg, MD) simulators. Expert proficiency criteria were derived from the median scores of a group of attending gastrointestinal (GI) surgeons and interventional gastroenterologists. These expert proficiency criteria were posted on the simulators for easy trainee access. All trainees in both groups had 24-h access to the simulators for practice and received objective, automated simulator feedback after each repetition. The proctored group also attended three small-group simulator training sessions taught by senior General Surgery residents. In these sessions, the instructor first demonstrated progressively more difficult simulator tasks, then observed each trainee performing the task, and finally provided feedback on trainee performance. Trainees were given 8 weeks to achieve expert-level proficiency. Complete details on the training curriculum, expert proficiency criteria, and comparative efficiency of the independent and proctored training methods have been published separately [13].

Animal lab procedure

After demonstrating proficiency on both simulators, trainees were eligible to participate in a live animal surgery lab, where they performed a series of laparoscopic and endoscopic tasks on pigs weighing 22.5–27.0 kg. The animal lab procedure was designed to mimic a hybrid procedure involving NOTES and laparoscopy; it comprised eight minimally invasive surgical tasks performed in sequence (Table 1). Prior to participating in the animal lab, all trainees were required to watch an instructional video that explained and demonstrated each task in detail. The narrated video included whiteboard diagrams of tasks and video clips of the tasks being performed from the perspective of laparoscopic and endoscopic cameras. Trainees had unlimited access to the video via a secure Website beginning one week in advance of their scheduled animal lab. For the first five tasks, trainees used a 10 mm single-channel upper endoscope (Olympus America Inc., Center Valley, PA), a 15 mm endoscopic snare, and toothed endoscopic biopsy forceps. For the final three tasks, trainees used a standard zero-degree laparoscope and various laparoscopic instruments, including Maryland graspers and a 10 mm clip applier (Covidien, Mansfield, MA).

Table 1 Animal lab task descriptions

Task | Objective
Peritoneal access | Drive endoscope from stomach into peritoneal cavity, along guidewire placed through existing gastrotomy
Endoscopic peritoneoscopy | Visualize and correctly identify all four quadrants of the peritoneal cavity with the endoscope
Endoscopic biopsy | Biopsy two marked targets on the lower abdominal wall with endoscopic forceps
Bowel manipulation | Visualize bowel loop marked with suture, grasp suture tail with endoscopic forceps, pull loop up to gastrotomy
Specimen retrieval | Visualize specimen (ketchup packet), capture with endoscopic snare, and pull back to gastrotomy
Gastrotomy clipping | Grasp gastrotomy with laparoscopic grasper and apply two clips to edge of gastrotomy
Laparoscopic biopsy | Biopsy two marked targets on the lower abdominal wall with laparoscopic grasper
Laparoscopic peritoneoscopy | Visualize and correctly identify all four quadrants of the peritoneal cavity with the laparoscope

Pre-procedure setup was uniform for each animal. Three laparoscopic ports were placed in standardized positions and the stomach was intubated with the endoscope. With laparoscopic assistance, a guidewire was passed via the endoscope into the peritoneal cavity through a 20 mm gastrotomy made in the anterior body of the stomach. The peritoneal cavity was then accessed with the endoscope. Two targets, representing metastatic lesions, were marked for endoscopic and laparoscopic biopsy by injecting a small amount of black ink into the preperitoneal space approximately 5 cm below the umbilicus in the midline and just lateral to the border of the left rectus abdominis muscle. For the bowel manipulation task, a purple 2-0 braided absorbable suture was placed in an upper loop of jejunum, leaving 2 cm tails. For the specimen retrieval task, a standard 1.3 × 2.5 × 7.5 cm individual ketchup packet was placed in the peritoneal cavity superior to the dome of the liver.

Once setup was complete, two to four trainees were assigned to each animal. The trainee who went first (i.e., the one who did not observe) was selected randomly by flipping a coin. One trainee assisted the operator in a standardized fashion by performing tasks such as holding the laparoscopic camera while others observed. Thus, it was possible for trainees to watch the actual procedure up to three times before performing it themselves. Each trainee began the procedure with the endoscope in the stomach and a transgastric guidewire in the working channel of the scope. Animals were sacrificed by trained animal lab technicians in a humane manner after the lab was completed.

Measurement of endpoints

Successful completion of each task and the time required for completion were recorded with a stopwatch by an attending surgeon or senior resident serving as scorekeeper. A time limit was imposed for each task, and trainees exceeding the limit were censored at the maximum time. Scorekeepers could provide verbal advice and feedback while the trainee

performed the procedures, but they were not allowed to handle the instruments or otherwise provide physical assistance. Scorekeepers were blinded to the training method of the trainees, with the exception of eight trainees for whom no suitably blinded scorekeepers were available. These cases were documented so that the resulting potential bias could be addressed in the analytic phase of the study. To confirm the construct validity of the tasks, three attending GI surgeons performed the procedure and their times to successful task completion were recorded.

Statistical analysis

Baseline characteristics of trainees in the proctored and independent groups were compared using Wilcoxon signed-rank tests for continuous variables and chi-square or Fisher's exact tests for categorical variables. The outcome of interest was time to successful procedure completion; shorter times to success indicated better animal lab performance. Time-to-success curves were compared between attending GI surgeons (experts) and trainees, using the log-rank test to confirm the construct validity of each task. Trainees were categorized into four groups corresponding to the four possible combinations of the two dichotomous main effect variables: VR training method (proctored versus independent) and observation status (observed versus did not observe). Time-to-success curves for the main effects were generated with the Kaplan–Meier method and compared with the log-rank test. Time to success for all tasks combined was assessed with a Cox proportional hazards model, using the gap-time modification for multiple events [18]. Main effects in the model were the VR training method and observation status. Multiple covariates were also assessed, including demographics, learning style, prior experience, baseline VR simulator performance, time required to achieve simulator proficiency, and elapsed time interval between completion of simulator training and animal lab participation. Interaction terms between the two main effects and covariates were also evaluated and found to be nonsignificant. The final model included the main


effects and any other variables found to be significant predictors of successful task completion. The proportional hazards assumption was tested for all included variables and was not violated. Statistical analyses were performed with SAS 9.2 (SAS Institute, Cary, NC).
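For readers who want to reproduce this style of analysis outside SAS, the sketch below shows how a gap-time (Prentice–Williams–Peterson) Cox model with clustering on trainee might be set up in Python with the lifelines package. The data layout, file name, and column names (group, trainee_id, task_seq, gap_time, success, and the covariates) are illustrative assumptions, not the study's actual dataset or code.

```python
# Illustrative sketch only (not the authors' SAS code): a gap-time Cox model
# for repeated tasks per trainee, plus an expert-vs-trainee log-rank check.
# Assumed columns: group ("expert"/"trainee"), trainee_id, task_seq,
# gap_time (seconds spent on the task), success (1 = completed within the
# time limit, 0 = censored at the limit), and the covariates independent,
# observed, female, days_since_vr.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("animal_lab_tasks.csv")  # hypothetical file name

# Construct validity for a single task: compare expert and trainee
# time-to-success curves with the log-rank test.
experts = df[(df["group"] == "expert") & (df["task_seq"] == 1)]
trainees = df[(df["group"] == "trainee") & (df["task_seq"] == 1)]
lr = logrank_test(experts["gap_time"], trainees["gap_time"],
                  event_observed_A=experts["success"],
                  event_observed_B=trainees["success"])
print(f"log-rank p = {lr.p_value:.3f}")

# Gap-time Cox model: the clock restarts at each task (gap time), each task
# in the sequence gets its own baseline hazard (strata), and standard errors
# are made robust to repeated observations per trainee (cluster_col).
cols = ["gap_time", "success", "task_seq", "trainee_id",
        "independent", "observed", "female", "days_since_vr"]
cph = CoxPHFitter()
cph.fit(df.loc[df["group"] == "trainee", cols],
        duration_col="gap_time", event_col="success",
        strata=["task_seq"], cluster_col="trainee_id")
cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs
```

In this formulation, a hazard ratio above 1 for a covariate indicates a greater instantaneous likelihood of completing a task successfully, which is how the HR estimates reported in the Results should be read.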

Results

Forty-three students volunteered for the study, but resource limitations meant that only the first 36 could be enrolled. Of the 36 enrolled, 32 (89%) completed VR simulation training and participated in the animal lab. Because of scheduling conflicts, four students (two proctored, two independent) dropped out during simulator training. Of those completing simulator training, 29 (91%) achieved simulator proficiency during the allotted eight-week training period, and the remaining three (two independent, one proctored) required one to two additional training days to achieve proficiency. Figure 1 summarizes the progression of subjects through the study [19]. Characteristics of the

trainees are given in Table 2. Demographics and previous exposure to laparoscopy and endoscopy were statistically similar between groups, but the baseline laparoscopic simulator performance of the independent group was slightly faster than that of the proctored group.

The performance of experts and trainees on the live animal model is compared in Table 3. Experts were faster and more likely to achieve success than trainees on all animal lab tasks except gastrotomy clipping. The gastrotomy clipping task did not achieve statistical significance because the expert group was small, and one expert required much more time to achieve success than the other two. Because the gastrotomy clipping task did not achieve statistical construct validity, it was excluded from subsequent proportional hazards modeling.

Table 4 compares the median times to successful completion of each individual task for the main effects: VR training method and observation status. Median completion times were similar for the independent and proctored VR training groups for all tasks except laparoscopic biopsy, in which the independent group was significantly faster and more likely to

Fig. 1 Flow diagram of subject progression through the study


Table 2 Trainee characteristics^a

Characteristic | Proctored (n = 16) | Independent (n = 16) | p Value
Observed procedure prior to performing | 10 (63) | 10 (63) | 1.0
Age (years) | 25 (20–29) | 24 (22–26) | 0.41
Female sex | 4 (25) | 2 (13) | 0.65
Medical school year 1–2 | 14 (88) | 10 (63) | 0.22
Medical school year 3–4 | 2 (12) | 6 (37) |
Accommodating/converging Kolb learning style | 13 (81) | 14 (88) | 1.0
Video game experience, cumulative (h/week) | 12 (0–50) | 5 (0–45) | 0.37
Previous exposure to laparoscopy | 5 (31) | 10 (63) | 0.16
Previous exposure to endoscopy | 4 (25) | 8 (50) | 0.27
Baseline endoscopic simulator performance
  Mucosa visualization efficiency (%/min) | 1.3 (0–5.3) | 0.9 (0.2–3.6) | 0.73
  Scope insertion speed (cm/min) | 4.5 (1.4–9.7) | 2.7 (0.6–6.3) | 0.08
Baseline laparoscopic simulator performance
  Grasping (targets/min) | 0 (0–1.0) | 0 (0–4.2) | 0.04
  Instrument navigation time (s) | 69 (46–84) | 53 (36–77) | 0.02
  Coordination time (s) | 115 (80–177) | 107 (76–124) | 0.06
Median VR training hours to achieve proficiency^b | 11.8 | 11.0 | 0.34
Time from end of VR training to animal lab (days) | 13 (0–28) | 13 (0–28) | 0.63

^a Categorical and continuous variables expressed as N (%) and median (range), respectively
^b Compared with log-rank test due to censoring

Table 3 Comparison of expert and trainee animal lab performance
Task completion times in seconds, median (interquartile range)

Task | Experts (n = 3) | Trainees (n = 32) | Median time ratio (trainees:experts) | p Value^a
Peritoneal access | 23 (21–25) | 43 (25–62) | 1.9 | 0.03
Endoscopic peritoneoscopy | 24 (18–55) | 120 (62–209) | 5.0 | <0.01
Endoscopic biopsy | 57 (38–70) | 160 (93–194) | 2.8 | <0.01
Bowel manipulation | 30 (27–113) | 136 (101–205) | 4.5 | <0.01
Specimen retrieval | 83 (42–86) | 179 (136–266) | 2.2 | <0.01
Laparoscopic peritoneoscopy | 14 (9–14) | 26 (20–49) | 1.9 | <0.01
Gastrotomy clipping | 20 (15–195) | 127 (74–167) | 6.4 | 0.40
Laparoscopic biopsy | 47 (30–65) | 104 (72–132) | 2.2 | <0.01

^a Log-rank test

succeed. Trainees who had observed had consistently faster median completion times than those who had not observed, although this pattern achieved significance only for the peritoneal access and laparoscopic peritoneoscopy tasks individually.

The final Cox proportional hazards model contained the training group, observation status, gender, and the time interval between VR training completion and animal lab participation. No other variables approached significance and no inter-variable interactions were observed. There was a nonsignificant trend toward greater likelihood of success for the independent group compared to the proctored group [Hazard Ratio (HR) 1.28; 95% Confidence

Interval (CI) 0.96–1.72; p = 0.09]. Trainees who observed the actual procedure in addition to watching the instructional video had a significantly greater likelihood of success than those who watched the instructional video alone (HR 1.47; 95% CI 1.09–1.98; p = 0.01). Figure 2 shows hazard ratios and confidence intervals for the joint effects of VR training method and observation, with the proctored group that did not observe serving as the reference group. Females were less likely to successfully complete the tasks in the allotted time (HR = 0.55; 95% CI 0.36–0.82; p = 0.004). The amount of time between VR training completion and animal lab participation also had a


Table 4 Animal lab performance by VR training method and observation status
Task completion times in seconds, median (IQR)

Task | Independent (n = 16) | Proctored (n = 16) | p Value^a | Did not observe (n = 12) | Observed (n = 20) | p Value^b
Peritoneal access | 37 (28–52) | 47 (21–117) | 0.21 | 61 (39–180) | 33 (21–47) | <0.01
Endoscopic peritoneoscopy | 121 (76–218) | 77 (54–209) | 0.28 | 167 (98–274) | 77 (56–165) | 0.19
Endoscopic biopsy | 130 (85–168) | 177 (113–207) | 0.17 | 160 (105–173) | 164 (92–202) | 0.51
Bowel manipulation | 151 (77–205) | 127 (108–210) | 0.90 | 139 (119–238) | 136 (96–195) | 0.30
Specimen retrieval | 165 (129–258) | 198 (143–316) | 0.22 | 201 (143–309) | 179 (116–258) | 0.59
Laparoscopic peritoneoscopy | 23 (20–51) | 35 (19–49) | 0.95 | 44 (26–67) | 22 (17–42) | 0.02
Gastrotomy clipping | 132 (63–160) | 127 (89–174) | 0.62 | 127 (67–163) | 125 (89–167) | 0.90
Laparoscopic biopsy | 89 (63–113) | 123 (91–160) | 0.02 | 113 (67–129) | 94 (72–142) | 0.84

^a Log-rank test, independent versus proctored groups
^b Log-rank test, observed versus did not observe groups

Fig. 2 Joint effects of virtual reality (VR) training method and observation on overall animal lab performance

significant negative effect on successful task completion. For each day elapsed, the likelihood of success decreased by 2% (HR 0.98; 95% CI 0.97–0.99; p = 0.009).
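As a rough illustration of what this per-day effect implies (a calculation of ours, not reported in the study), the proportional hazards model treats the daily hazard ratio as multiplicative, so a 30-day interval between VR training and the animal lab would correspond to

\[ \mathrm{HR}_{30\ \mathrm{days}} = 0.98^{30} \approx 0.55, \]

that is, roughly a halving of the instantaneous likelihood of task success compared with a trainee tested immediately after completing VR training.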

Discussion

Our study demonstrated similar operative performance among medical students trained to VR proficiency by independent and proctored methods, with a trend toward better performance in the independent (automated feedback only) group. Those who watched an actual procedure in addition to the instructional video performed significantly better than those who watched the instructional video alone, and the benefit of real-life observation was unaffected by the VR training method.

While various aspects of the relationships between observation, practice, and various forms of feedback in

learning psychomotor skills have been studied previously, many historical studies have investigated only simple skills, with questionable generalizability to complex minimally invasive surgical tasks [20, 21]. Studies of psychomotor skills learning are highly context-dependent and often arrive at seemingly contradictory conclusions. We observed that human expert feedback during practice conferred no advantage over automated simulator feedback, but the effect of feedback on learning likely depends on the task complexity. For learning simple open suturing and knot-tying, computer-based instruction has similar efficacy regardless of whether live expert feedback is employed [22]. For more complex skills, feedback during practice results in improved skill acquisition compared with practice alone [23], although excessive feedback may actually impede learning [24]. The previous body of work suggests that some level of feedback while practicing complex skills is beneficial, and our data suggest that for


VR simulator training, automated simulator feedback is sufficient without the need for additional feedback from a human expert. This is likely attributable to the sophisticated level of automated feedback metrics available from current VR simulators. In the context of simulator training to acquire basic MIS skills, trainees may value the simulator feedback as being clearer and more objective than human expert feedback. However, this attitude would not be expected to apply to the performance of complex real-world operative procedures, in which direct instructor feedback is essential and highly valued. Expert feedback may also be superior to automated feedback with respect to long-term skill retention, which was not assessed in the present study [25].

To our knowledge, no previous studies have compared the effectiveness of actual procedure observation to an instructional video alone for acquiring MIS skills. However, previous studies have demonstrated that observing a procedure before attempting it results in improved performance [15]. Consistent with sports training literature, our data demonstrate that observational learning is effective even when a relatively unskilled participant (i.e., another trainee) is the object of observation [26]. The benefit of observation was probably heightened by the observers' knowledge that they would soon be expected to perform the procedure themselves, the so-called "intention superiority effect" [27]. The effect of the verbal feedback provided by the scorekeepers was not evaluated; previous studies have arrived at differing conclusions regarding the effect of expert feedback on observational learning [28, 29].

We found significant gender differences in animal lab performance, but this finding must be interpreted with caution given the small number of females enrolled. Further studies involving larger groups and more balanced numbers of males and females are needed to reach valid conclusions regarding possible gender differences. Video game experience was not a significant predictor of success and its inclusion in the model did not change the results, confirming the poor predictive validity of video game experience for real operative performance [12]. We also observed that longer time intervals between VR training completion and animal lab participation were associated with poorer performance, consistent with previously reported deterioration of irregularly practiced VR-acquired skills over time [30].

Our study must be interpreted in the context of several limitations. First, potential observer bias was introduced by the inclusion of cases where the expert scorekeeper was not blinded to the VR training method. However, a non-blinded status indicator variable was not significant in our statistical models, and its inclusion had no effect on our results. Similarly, exclusion of the non-blinded

observations reduced the power of the analyses but did not substantially change the hazard ratios. Although time to success is an objective performance measure that would be expected to minimize inter-rater variability, it fails to capture inappropriate instrument use or excessively rough tissue handling. The procedure was standardized and the animals were similar, but anatomic variations may have introduced confounding factors. The sample size was small because lab time, equipment, staffing, and animals were all limited, which in turn limits the power of the study. However, by having each subject perform multiple tasks and analyzing all tasks combined, we were able to overcome this limitation. The power to detect a hazard ratio (i.e., relative risk) of 1.5 and 2.0 for either main effect on any individual task was only 17 and 37%, respectively. For all tasks combined, the power improved to 77 and 99%, respectively. This study was performed in medical students, so its generalizability to junior surgical residents, the real population of interest, is limited. Students differ from residents in that they have neither the cognitive contextual framework nor the opportunity to apply their newly learned skills on a regular basis. Finally, we did not include an untrained control group for comparison with the VR-trained groups, so we were unable to address whether proficiency-based VR training is better than no training for hybrid minimally invasive tasks. However, the observation that performance was best immediately following VR training suggests that the training was effective. Furthermore, multiple previous studies have demonstrated the superiority of proficiency-based VR training over no training at all [7, 9, 31].

In conclusion, our study suggests that human expert feedback during VR simulator practice provides no benefit over automated simulator feedback alone. Therefore, surgical training programs may not need to devote time and effort to providing expert feedback during proficiency-based VR simulator training, as long as the proficiency criteria are available and clear. Surgical trainees should recognize that while instructional videos and VR training are useful resources, they do not appear to be adequate substitutes for observation of complex procedures. Observing surgical procedures, in the context of deliberate practice and directed hands-on training, remains a valuable learning experience, even when the operator is a relatively unskilled trainee. Further studies of this topic should be performed in junior surgical residents to validate these conclusions.
Acknowledgments The authors acknowledge Joshua L. Argo and Wai M. A. Yeung for their assistance with data acquisition. They are also grateful to Olympus of America and Covidien for equipment donation and technical assistance. The study received no external funding, although Christopher W. Snyder received salary support under an educational grant from Olympus America Inc.


Conflict of interest Dr. Snyder received salary support under an educational grant from Olympus America Inc. Drs. Vandromme, Argo, Yeung, Porterfield, Clements, and Hawn have no conflicts of interest or financial ties to disclose. Ms. Tyra has no conflicts of interest or financial ties to disclose.

References

1. Canes D, Desai MM, Aron M et al (2008) Transumbilical single-port surgery: evolution and current status. Eur Urol 54:1020–1029
2. Pearl JP, Ponsky JL (2008) Natural orifice translumenal endoscopic surgery: a critical review. J Gastrointest Surg 12:1293–1300
3. Wu C, Prachand VN (2008) Reverse NOTES: a hybrid technique of laparoscopic and endoscopic retrieval of an ingested foreign body. JSLS 12:395–398
4. Gorman PJ, Meier AH, Rawn C et al (2000) The future of medical education is no longer blood and guts, it is bits and bytes. Am J Surg 180:353–356
5. Rehrig ST, Powers K, Jones DB (2008) Integrating simulation in surgery as a teaching tool and credentialing standard. J Gastrointest Surg 12:222–233
6. Scott DJ, Cendan JC, Pugh CM et al (2008) The changing face of surgical education: simulation as the new paradigm. J Surg Res 147:189–193
7. Ahlberg G, Hultcrantz R, Jaramillo E et al (2005) Virtual reality colonoscopy simulation: a compulsory practice for the future colonoscopist? Endoscopy 37:1198–1204
8. Gallagher AG, Ritter EM, Champion H et al (2005) Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 241:364–372
9. Gurusamy KS, Aggarwal R, Palanivelu L et al (2009) Virtual reality training for surgical trainees in laparoscopic surgery. Cochrane Database Syst Rev (1) Art. No.: CD006575. doi:10.1002/14651858.CD006575.pub2
10. Seymour NE (2008) VR to OR: a review of the evidence that virtual reality simulation improves operating room performance. World J Surg 32:182–188
11. Sturm LP, Windsor JA, Cosman PH et al (2008) A systematic review of skills transfer after surgical simulation training. Ann Surg 248:166–179
12. Hogle NJ, Widmann WD, Ude AO et al (2008) Does training novices to criteria and does rapid acquisition of skills on laparoscopic simulators have predictive validity or are we just playing video games? J Surg Educ 65:431–435
13. Snyder CW, Vandromme MJ, Tyra SL et al (2009) Proficiency-based laparoscopic and endoscopic training with virtual reality simulators: a comparison of proctored and independent approaches. J Surg Educ 66:201–207
14. Halsted WS (1904) The training of the surgeon. Bull Johns Hopkins Hosp 15:267–276
15. Custers EJ, Regehr G, McCulloch W et al (1999) The effects of modeling on learning a simple surgical procedure: see one, do one or see many, do one? Adv Health Sci Educ Theory Pract 4:123–143
16. Laguna PL (2008) Task complexity and sources of task-related information during the observational learning process. J Sports Sci 26:1097–1113
17. Weeks DL, Anderson LP (2000) The interaction of observational learning with overt practice: effects on motor skill learning. Acta Psychol (Amst) 104:259–271
18. Prentice RL, Williams BJ, Peterson AV (1981) On the regression analysis of multivariate failure time data. Biometrika 68:373–379
19. Moher D, Schulz KF, Altman D (2001) The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. J Am Med Assoc 285:1987–1991
20. Kaufman HH, Wiegand RL, Tunick RH (1987) Teaching surgeons to operate–principles of psychomotor skills training. Acta Neurochir (Wien) 87:1–7
21. Wulf G, Shea CH (2002) Principles derived from the study of simple skills do not generalize to complex skill learning. Psychon Bull Rev 9:185–211
22. Xeroulis GJ, Park J, Moulton CA et al (2007) Teaching suturing and knot-tying skills to medical students: a randomized controlled study comparing computer-based video instruction and (concurrent and summary) expert feedback. Surgery 141:442–449
23. Risucci D, Cohen JA, Garbus JE (2001) The effects of practice and instruction on speed and accuracy during resident acquisition of simulated laparoscopic skills. Curr Surg 58:230–235
24. Badets A, Blandin Y (2004) The role of knowledge of results frequency in learning through observation. J Mot Behav 36:62–70
25. Porte MC, Xeroulis G, Reznick RK et al (2007) Verbal feedback from an expert is more effective than self-accessed feedback about motion efficiency in learning new surgical skills. Am J Surg 193:105–110
26. Pollock BJ, Lee TD (1992) Effects of the model's skill level on observational motor learning. Res Q Exerc Sport 63:25–29
27. Badets A, Blandin Y, Bouquet CA et al (2006) The intention superiority effect in motor skill learning. J Exp Psychol Learn Mem Cogn 32:491–505
28. Bergamaschi R, Dicko A (2000) Instruction versus passive observation: a randomized educational research study on laparoscopic suture skills. Surg Laparosc Endosc Percutan Tech 10:319–322
29. Masters RS, Lo CY, Maxwell JP et al (2008) Implicit motor learning in surgery: implications for multi-tasking. Surgery 143:140–145
30. Stefanidis D, Korndorffer JR Jr, Sierra R et al (2005) Skill retention following proficiency-based laparoscopic simulator training. Surgery 138:165–170
31. Andreatta PB, Woodrum DT, Birkmeyer JT et al (2006) Laparoscopic skills are improved with LapMentor training: results of a randomized, double-blinded study. Ann Surg 243:854–860; discussion 860–863

