ISRN Probability and Statistics
Volume 2012 (2012), Article ID 656390, 11 pages
http://dx.doi.org/10.5402/2012/656390
Research Article

On the Equivalence of Multirater Kappas Based on 2-Agreement and 3-Agreement with Binary Scores

Matthijs J. Warrens

Unit of Methodology and Statistics, Institute of Psychology, Leiden University, P.O. Box 9555, 2300 RB Leiden, The Netherlands

Received 7 August 2012; Accepted 25 August 2012

Academic Editors: J. Hu and O. Pons

Copyright © 2012 Matthijs J. Warrens. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Cohen’s kappa is a popular descriptive statistic for summarizing agreement between the classifications of two raters on a nominal scale. With m raters there are several views in the literature on how to define agreement. The concept of g-agreement refers to the situation in which it is decided that there is agreement if g out of m raters assign an object to the same category. Given m raters we can formulate m − 1 multirater kappas, one based on 2-agreement, one based on 3-agreement, and so on, and one based on m-agreement. It is shown that if the scale consists of only two categories, the multirater kappas based on 2-agreement and 3-agreement are identical.
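
To illustrate the stated equivalence numerically, the following Python sketch computes multirater kappas based on 2-agreement and 3-agreement for binary scores. It assumes the g-agreement kappa has the form (P(g) − E(g)) / (1 − E(g)), where P(g) is the observed proportion of objects on which all raters in a g-tuple assign the same category, averaged over all g-tuples, and E(g) is the corresponding chance-expected proportion computed from each rater's own marginal probability of assigning category 1. The function name and the example ratings are illustrative and not taken from the article.

# Minimal sketch (assumed definition, see lead-in): g-agreement kappa for binary scores.
from itertools import combinations

def g_agreement_kappa(scores, g):
    """scores: list of equal-length binary (0/1) rating lists, one per rater."""
    m, n = len(scores), len(scores[0])
    p = [sum(r) / n for r in scores]  # each rater's marginal rate of category 1
    obs, exp = [], []
    for tup in combinations(range(m), g):
        # observed g-agreement: all raters in the g-tuple give the same score
        agree = sum(len({scores[i][x] for i in tup}) == 1 for x in range(n)) / n
        obs.append(agree)
        # chance g-agreement: all assign 1, or all assign 0, independently
        e1, e0 = 1.0, 1.0
        for i in tup:
            e1 *= p[i]
            e0 *= 1 - p[i]
        exp.append(e1 + e0)
    P = sum(obs) / len(obs)  # averaged over all g-tuples of raters
    E = sum(exp) / len(exp)
    return (P - E) / (1 - E)

# Four raters, ten objects, two categories (0/1); hypothetical example data:
ratings = [
    [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 0, 1, 1],
]
print(g_agreement_kappa(ratings, 2))  # kappa based on 2-agreement
print(g_agreement_kappa(ratings, 3))  # kappa based on 3-agreement

Under this assumed definition the two printed values coincide (up to floating-point rounding) for any binary data, which is the equivalence stated in the abstract.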