Anesthesiology Research and Practice
Volume 2016 (2016), Article ID 9348478, 13 pages
http://dx.doi.org/10.1155/2016/9348478
Research Article

Development and Testing of Screen-Based and Psychometric Instruments for Assessing Resident Performance in an Operating Room Simulator

1Department of Anesthesiology, University of Miami, Ryder Trauma Center, 1800 NW 10 Avenue, Miami, FL 33136, USA
2Department of Biomedical Engineering, University of Miami, Ryder Trauma Center, 1800 NW 10 Avenue, Miami, FL 33136, USA
3Music Engineering Technology, University of Miami, Frost School of Music, 1550 Brescia Avenue, Founder’s Hall Rm 140, Coral Gables, FL 33146, USA

Received 15 January 2016; Accepted 29 March 2016

Academic Editor: Alex Macario

Copyright © 2016 Richard R. McNeer et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Introduction. Medical simulators are used for assessing clinical skills and, increasingly, for testing hypotheses. We developed and tested an approach for assessing the performance of anesthesia residents using screen-based simulation, designed so that expert raters remain blinded to subject identity and experimental condition. Methods. Twenty anesthesia residents managed emergencies in an operating room simulator by logging actions through a custom graphical user interface. Two expert raters rated performance based on these entries using custom Global Rating Scale (GRS) and Crisis Management Checklist (CMC) instruments. Interrater reliability was measured by calculating intraclass correlation coefficients (ICC), and internal consistency of the instruments was assessed with Cronbach’s alpha. Agreement between the GRS and CMC was measured using the Spearman rank correlation (SRC). Results. Interrater agreement (GRS: ICC = 0.825; CMC: ICC = 0.878) and internal consistency (GRS: alpha = 0.838; CMC: alpha = 0.886) were good for both instruments. Subscale analysis indicated that several instrument items could be discarded. GRS and CMC scores were highly correlated (SRC = 0.948). Conclusions. In this pilot study, we demonstrated that screen-based simulation allows blinded assessment of performance. The GRS and CMC instruments demonstrated good interrater agreement and internal consistency. We plan to test the construct validity of our instruments further by measuring performance in our simulator as a function of training level.
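For readers who wish to see how the reported reliability statistics are computed, the sketch below illustrates one conventional formulation: a two-way random-effects intraclass correlation for absolute agreement, ICC(2,1), Cronbach's alpha, and the Spearman rank correlation, all computed from subjects-by-raters (or subjects-by-items) score matrices. This is a minimal illustration in Python with NumPy and SciPy under the assumption that this is the ICC variant used; the score arrays are hypothetical placeholders, not the study data, and the printed values will not match the results above.

    import numpy as np
    from scipy.stats import spearmanr

    def icc_2_1(x):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.
        x is an (n subjects, k raters) array of scores."""
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)
        col_means = x.mean(axis=0)
        # Mean squares from the two-way ANOVA decomposition.
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
        sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                        # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    def cronbach_alpha(items):
        """Cronbach's alpha; items is an (n subjects, m items) array."""
        m = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (m / (m - 1)) * (1 - item_vars / total_var)

    # Hypothetical data: 20 residents scored by 2 raters (not the study data).
    rng = np.random.default_rng(0)
    grs = rng.normal(35, 5, size=(20, 2))   # GRS totals, one column per rater
    cmc = rng.normal(20, 3, size=(20, 2))   # CMC totals, one column per rater

    print("GRS ICC(2,1):", icc_2_1(grs))
    print("CMC ICC(2,1):", icc_2_1(cmc))

    # Hypothetical item-level responses for internal consistency.
    grs_items = rng.normal(5, 1, size=(20, 7))
    print("GRS Cronbach alpha:", cronbach_alpha(grs_items))

    # Spearman correlation between per-subject mean GRS and CMC scores.
    rho, p = spearmanr(grs.mean(axis=1), cmc.mean(axis=1))
    print("GRS vs CMC Spearman rho:", rho, "p:", p)

The ICC(2,1) form is shown because it treats both subjects and raters as random samples, which matches a design in which two raters score all twenty residents; other ICC variants (e.g., consistency rather than absolute agreement) would use the same mean squares combined differently.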