ISRN Education
Volume 2013 (2013), Article ID 840627, 6 pages
http://dx.doi.org/10.1155/2013/840627
Research Article

Can We Share Multiple Choice Questions across Borders? Validation of the Dutch Knowledge Assessment in Family Medicine in Flanders

1Interuniversity Centre for Education in General Practice, Kapucijnenvoer 33, Block J, P.O. Box 7001, 3000 Leuven, Belgium
2Department of Family Medicine and Primary Health Care, Ghent University, De Pintelaan 185-6K3, Belgium
3University of Antwerp, Campus Drie Eiken, D.R. 315, Universiteitsplein 1, 2610 Wilrijk, Belgium
4Department of Public Health and Primary Care, Academic Centre of General Practice, University of Leuven, Kapucijnenvoer 33, Block J, P.O. Box 7001, 3000 Leuven, Belgium
5Department of Public Health and Primary Care, Academic Centre of General Practice, Catholic University Leuven, Leuven, Belgium
6Department of Public Health and Primary Care, Academic Centre of General Practice, Academic Teaching Practice, Leuven, Belgium

Received 21 August 2013; Accepted 30 September 2013

Academic Editors: K. Kingsley and R. Pasnak

Copyright © 2013 Lynn Ryssaert et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background. One of the methods to test the knowledge of Family Medicine trainees is a written exam composed of multiple choice questions. Creating high-quality multiple choice questions requires considerable experience, knowledge, and time. This study explores the opportunity to administer the Dutch knowledge assessment in Flanders as well and to use the test for formative purposes. Methods. The test was administered to a Flemish sample of postgraduate Family Medicine (FM) trainees and FM trainers. The Dutch test, adjusted to the Flemish context, was analyzed according to classical test theory: the difficulty factor and discriminating power of the items and the reliability of the test. Results. 82 of the 154 items divided the group well into roughly equal parts of correct and incorrect responders. The discrimination index of the items with an acceptable difficulty factor ranged from −0.012 to 0.530. The item-test correlation shows that 52 items do not fit and 87 items need revision to varying degrees. The test reliability was 0.917. Conclusion. The test was highly reliable, but many MC questions appeared to be too easy and poorly discriminating. Therefore, we question the validity of the test and recommend reconsidering the items on the basis of difficulty before it is applied as a mandatory formative test.
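
For readers unfamiliar with the classical test theory statistics reported above, the following minimal Python sketch illustrates how item difficulty, the discrimination index (upper-minus-lower 27% groups), the corrected item-test correlation, and Cronbach's alpha reliability are commonly computed from a binary (correct/incorrect) response matrix. It is an illustrative sketch under standard textbook definitions, not the authors' actual analysis code; the function name, thresholds, and simulated data are assumptions.

    import numpy as np

    def classical_item_analysis(responses):
        """Classical test theory item analysis for a binary response matrix.

        responses: 2D array (n_examinees x n_items), 1 = correct, 0 = incorrect.
        Returns item difficulty, discrimination index, corrected item-test
        correlation, and Cronbach's alpha for the whole test.
        """
        responses = np.asarray(responses, dtype=float)
        n_examinees, n_items = responses.shape

        # Difficulty factor: proportion of examinees answering each item correctly.
        difficulty = responses.mean(axis=0)

        # Total test score per examinee.
        total = responses.sum(axis=1)

        # Discrimination index: difference in item difficulty between the
        # top 27% and bottom 27% of examinees ranked by total score.
        order = np.argsort(total)
        k = max(1, int(round(0.27 * n_examinees)))
        lower, upper = responses[order[:k]], responses[order[-k:]]
        discrimination = upper.mean(axis=0) - lower.mean(axis=0)

        # Corrected item-test correlation: correlation between each item score
        # and the total score on the remaining items.
        item_rest_corr = np.array([
            np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
            for i in range(n_items)
        ])

        # Cronbach's alpha: internal-consistency reliability of the test.
        item_var = responses.var(axis=0, ddof=1)
        total_var = total.var(ddof=1)
        alpha = (n_items / (n_items - 1)) * (1 - item_var.sum() / total_var)

        return difficulty, discrimination, item_rest_corr, alpha

    # Example with simulated data (illustrative only, not the study data).
    rng = np.random.default_rng(0)
    simulated = (rng.random((200, 154)) < 0.7).astype(int)
    p, d, r, alpha = classical_item_analysis(simulated)
    print(f"mean difficulty {p.mean():.2f}, mean discrimination {d.mean():.2f}, alpha {alpha:.3f}")

In this framework, items with a difficulty factor near 0.5 split the group into roughly equal halves of correct and incorrect responders, very high difficulty factors indicate items that are too easy, and low or negative discrimination and item-test correlation values flag items that fail to separate stronger from weaker examinees.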