Abstract

Cohen’s kappa is a popular descriptive statistic for summarizing agreement between the classifications of two raters on a nominal scale. With three or more raters there are several views in the literature on how to define agreement. The concept of g-agreement refers to the situation in which it is decided that there is agreement if g out of m raters assign an object to the same category. Given m raters we can formulate m − 1 multirater kappas, one based on 2-agreement, one based on 3-agreement, and so on, and one based on m-agreement. It is shown that if the scale consists of only two categories the multirater kappas based on 2-agreement and 3-agreement are identical.

1. Introduction

In social sciences and medical research it is frequently required that a group of objects is rated on a nominal scale with two or more categories. The raters may be pathologists who rate the severity of lesions from scans, clinicians who classify children on asthma severity, or competing diagnostic devices that classify the extent of disease in patients. Because there is often no gold standard, analysis of the interrater data provides a useful means of assessing the reliability of the rating system. Therefore, researchers often require that the classification task is performed by multiple raters. A standard tool for the analysis of agreement in a reliability study with two raters is Cohen’s kappa [5, 28, 34], denoted by $\kappa$ [2, 12]. The value of Cohen’s $\kappa$ is 1 when perfect agreement between the two raters occurs, 0 when agreement is equal to that expected under independence, and negative when agreement is less than expected by chance. A value above .60 may indicate good agreement, whereas a value above .80 may even indicate excellent agreement [4, 16]. A variety of extensions of Cohen’s $\kappa$ have been developed [19]. These include kappas for groups of raters [24, 25], kappas for multiple raters [15, 29], and weighted kappas [26, 30, 31]. This paper focuses on kappas for multiple raters making judgments on a binary scale.

With multiple raters there are several views on how to define agreement [13, 21, 22]. One may decide that there is only agreement if all m raters assign a subject to the same category (see, e.g., [27]). This type of agreement is referred to as simultaneous agreement, m-agreement, or DeMoivre’s definition of agreement [13]. Since a single deviating rating of a subject will lead to the conclusion that there is no agreement with respect to that subject, m-agreement seems especially useful when the researcher’s demands are extremely high [22]. Alternatively, a researcher may decide that there is already agreement if any two raters categorize an object consistently. In this case we speak of pairwise agreement or 2-agreement. Conger [6] argued that agreement among raters can actually be considered an arbitrary choice along a continuum ranging from 2-agreement to m-agreement. The concept of g-agreement with 2 ≤ g ≤ m refers to the situation in which it is decided that there is agreement if g out of m raters assign an object to the same category [6].

Given m raters we can formulate m − 1 multirater kappas, one based on 2-agreement, one based on 3-agreement, and so on, and one based on m-agreement. Although all these kappas can be defined from a mathematical perspective, the multirater kappas in general produce different values (see, e.g., [32, 33]). The difficulty for a researcher is to decide which form of g-agreement should be used when one is looking for agreement between ratings and the raters are assumed to be equally skilled. Popping [22] notes that in a considerable part of the literature multirater kappas based on 2-agreement are used. Conger [6] notes that coefficients based on 3-agreement may be especially useful when the researcher’s demands are slightly higher. Stronger forms of g-agreement may in many practical situations be too demanding. However, it turns out that with ratings on a dichotomous scale the multirater kappas based on 2-agreement and 3-agreement are equivalent. This fact is proved in Section 3. First, Section 2 is used to introduce notation and present definitions of 2-, 3-, and 4-agreement. The multirater kappas and the main result are then presented in Section 3. Section 4 contains a discussion.

2. 2-, 3-, and 4-Agreement

In this section we consider quantities of g-agreement for g = 2, 3, 4. Suppose that m observers each rate the same set of n objects (individuals, observations) on a dichotomous scale. The two categories are labeled 0 and 1, meaning, for example, presence and absence of a trait or a symptom. So, the data consist of m binary variables of length n. Let i, j, k, l denote raters, let a, b, c, d ∈ {0, 1} denote categories, and let $n_a^{(i)}$ denote the number of times rater i used category a. Furthermore, let $n_{ab}^{(ij)}$ denote the number of times rater i assigned an object to category a and rater j assigned an object to category b. The quantities $n_{abc}^{(ijk)}$ and $n_{abcd}^{(ijkl)}$ are defined analogously. For notational convenience we will work with the relative frequencies $p_a^{(i)} = n_a^{(i)}/n$, $p_{ab}^{(ij)} = n_{ab}^{(ij)}/n$, $p_{abc}^{(ijk)} = n_{abc}^{(ijk)}/n$, and $p_{abcd}^{(ijkl)} = n_{abcd}^{(ijkl)}/n$.
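The following Python sketch is not part of the original article; it merely illustrates how the marginal proportions $p_a^{(i)}$ and the 2-agreement quantities $p_{ab}^{(ij)}$ can be computed from an m × n array of binary ratings. The data and function names are hypothetical.

```python
# Minimal sketch (hypothetical data and function names): computing the
# relative frequencies p_a^(i) and p_ab^(ij) from an m x n array of 0/1 ratings.
import numpy as np

ratings = np.array([
    [1, 0, 1, 1, 0, 1],   # rater 1
    [1, 0, 1, 0, 0, 1],   # rater 2
    [1, 1, 1, 1, 0, 1],   # rater 3
])

def marginal_proportions(ratings, i):
    """p_a^(i): relative frequency with which rater i used category a."""
    return {a: float(np.mean(ratings[i] == a)) for a in (0, 1)}

def pair_proportions(ratings, i, j):
    """p_ab^(ij): relative frequency of objects that rater i put in
    category a and rater j put in category b."""
    return {(a, b): float(np.mean((ratings[i] == a) & (ratings[j] == b)))
            for a in (0, 1) for b in (0, 1)}

print(marginal_proportions(ratings, 0))   # e.g. {0: 0.333..., 1: 0.666...} for rater 1
print(pair_proportions(ratings, 0, 1))    # the four 2-agreement quantities
```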

For illustrating the concepts and results presented in this paper we use the study presented in O’Malley et al. [20]. In this study four pathologists (raters 1, 3, 5, and 8 in Figure 6 in [20]) examined images from 30 columnar cell lesions of the breast with low-grade/monomorphic-type cytologic atypia. The pathologists were instructed to categorize each lesion as either “Flat Epithelial Atypia” (coded 1) or “Not Atypical” (coded 0). The results for each rater for all 30 cases are presented in Table 1. The four columns labeled 1 to 4 of Table 1 contain the ratings of the pathologists. The frequencies in the first column of Table 1 indicate how many times, out of a total of 30 cases, a certain pattern of ratings occurred. Only five of all theoretically possible patterns of 1s and 0s are observed in these data. Values of various multirater kappas for these data are presented on the right-hand side of the table. The formulas of the multirater kappas are presented in Section 3.

We can think of the four proportions $p_{00}^{(ij)}$, $p_{01}^{(ij)}$, $p_{10}^{(ij)}$, and $p_{11}^{(ij)}$ as the elements of a $2 \times 2$ table that summarizes the 2-agreement between raters i and j [10]. These proportions are quantities of 2-agreement because they describe information on a pair of raters. In general we have
$p_{00}^{(ij)} + p_{01}^{(ij)} + p_{10}^{(ij)} + p_{11}^{(ij)} = 1.$   (2.1)
Summing over the rows of this table we obtain the marginal totals $p_0^{(i)} = p_{00}^{(ij)} + p_{01}^{(ij)}$ and $p_1^{(i)} = p_{10}^{(ij)} + p_{11}^{(ij)}$ corresponding to rater i.

Example 2.1. For raters 1 and 2 in Table 1 the four 2-agreement proportions sum to 1, illustrating identity (2.1). The marginal totals indicate how often raters 1 and 2 used the categories 0 and 1.

We can think of the eight proportions $p_{abc}^{(ijk)}$ as the elements of a $2 \times 2 \times 2$ table that summarizes the 3-agreement between raters i, j, and k. We have
$\sum_{a,b,c \in \{0,1\}} p_{abc}^{(ijk)} = 1.$   (2.4)
Summing over the direction corresponding to rater k, the $2 \times 2 \times 2$ table collapses into the $2 \times 2$ table for raters i and j.
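As a complement to this description, the sketch below (again with hypothetical data and helper names) builds the $2 \times 2 \times 2$ table of 3-agreement quantities and checks numerically that its entries sum to 1 and that collapsing over the third rater recovers the $2 \times 2$ table of the first two raters.

```python
# Sketch (hypothetical data and helper names): the 2x2x2 table of 3-agreement
# quantities p_abc^(ijk), and a check that collapsing over rater k recovers
# the 2x2 table for raters i and j.
import numpy as np

ratings = np.array([
    [1, 0, 1, 1, 0, 1],   # rater i
    [1, 0, 1, 0, 0, 1],   # rater j
    [1, 1, 1, 1, 0, 1],   # rater k
])

def triple_proportions(ratings, i, j, k):
    """p_abc^(ijk) for all (a, b, c) in {0,1}^3."""
    return {(a, b, c): np.mean((ratings[i] == a) &
                               (ratings[j] == b) &
                               (ratings[k] == c))
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

p3 = triple_proportions(ratings, 0, 1, 2)
assert abs(sum(p3.values()) - 1.0) < 1e-12          # identity (2.4)

# Collapsing the 2x2x2 table over rater k gives the 2x2 table of raters i and j.
p2_from_p3 = {(a, b): p3[(a, b, 0)] + p3[(a, b, 1)]
              for a in (0, 1) for b in (0, 1)}
p2_direct = {(a, b): np.mean((ratings[0] == a) & (ratings[1] == b))
             for a in (0, 1) for b in (0, 1)}
assert all(abs(p2_from_p3[key] - p2_direct[key]) < 1e-12 for key in p2_direct)
```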

Example 2.2. For raters 1, 2, and 3 in Table 1 at most five of the eight 3-agreement proportions are nonzero, since only five rating patterns are observed. Furthermore, the eight proportions sum to 1, illustrating identity (2.4).

The 2-agreement and 3-agreement quantities are related in the following way. For $a, b \in \{0, 1\}$ we have the identities
$p_{ab}^{(ij)} = p_{ab0}^{(ijk)} + p_{ab1}^{(ijk)},$   (2.7a)
$p_{ab}^{(ik)} = p_{a0b}^{(ijk)} + p_{a1b}^{(ijk)},$   (2.7b)
$p_{ab}^{(jk)} = p_{0ab}^{(ijk)} + p_{1ab}^{(ijk)}.$   (2.7c)
For example, we have $p_{11}^{(ij)} = p_{110}^{(ijk)} + p_{111}^{(ijk)}$. Moreover, we have an analogous set of identities for products of the marginal totals. That is, for $a, b \in \{0, 1\}$ we have the identities
$p_a^{(i)} p_b^{(j)} = p_a^{(i)} p_b^{(j)} p_0^{(k)} + p_a^{(i)} p_b^{(j)} p_1^{(k)},$   (2.8a)
$p_a^{(i)} p_b^{(k)} = p_a^{(i)} p_0^{(j)} p_b^{(k)} + p_a^{(i)} p_1^{(j)} p_b^{(k)},$   (2.8b)
$p_a^{(j)} p_b^{(k)} = p_0^{(i)} p_a^{(j)} p_b^{(k)} + p_1^{(i)} p_a^{(j)} p_b^{(k)}.$   (2.8c)
Using the relations between the 2-agreement and 3-agreement quantities in (2.7a), (2.7b), and (2.7c) and (2.8a), (2.8b), and (2.8c) we may derive the following identities. Proposition 2.3 is used in the proof of the theorem in Section 3.

Proposition 2.3. Consider three raters i, j, and k. One has
$(p_{00}^{(ij)} + p_{11}^{(ij)}) + (p_{00}^{(ik)} + p_{11}^{(ik)}) + (p_{00}^{(jk)} + p_{11}^{(jk)}) = 1 + 2(p_{000}^{(ijk)} + p_{111}^{(ijk)}),$   (2.9)
$(p_0^{(i)} p_0^{(j)} + p_1^{(i)} p_1^{(j)}) + (p_0^{(i)} p_0^{(k)} + p_1^{(i)} p_1^{(k)}) + (p_0^{(j)} p_0^{(k)} + p_1^{(j)} p_1^{(k)}) = 1 + 2(p_0^{(i)} p_0^{(j)} p_0^{(k)} + p_1^{(i)} p_1^{(j)} p_1^{(k)}).$   (2.10)

Proof. We can express the sum of the 2-agreement quantities
$(p_{00}^{(ij)} + p_{11}^{(ij)}) + (p_{00}^{(ik)} + p_{11}^{(ik)}) + (p_{00}^{(jk)} + p_{11}^{(jk)})$
in terms of 3-agreement quantities using the identities in (2.7a), (2.7b), and (2.7c). Doing this we obtain
$3(p_{000}^{(ijk)} + p_{111}^{(ijk)}) + p_{001}^{(ijk)} + p_{010}^{(ijk)} + p_{100}^{(ijk)} + p_{011}^{(ijk)} + p_{101}^{(ijk)} + p_{110}^{(ijk)}.$   (2.12)
Applying identity (2.4) to (2.12) we obtain identity (2.9). Using the identities in (2.8a), (2.8b), and (2.8c), identity (2.10) is obtained in a similar way.
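The identities in Proposition 2.3 can also be checked numerically. The following sketch, using simulated ratings and hypothetical function names, verifies (2.9) and (2.10) for three raters.

```python
# Numerical check of Proposition 2.3 (a sketch with simulated data):
# the sum of the three pairwise quantities p_00 + p_11 equals
# 1 + 2*(p_000 + p_111), and the analogous identity holds for the
# products of the marginal totals.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(3, 50))          # three raters, 50 objects

def pair_obs(x, y):
    """p_00^(ij) + p_11^(ij): observed 2-agreement for a pair of raters."""
    return np.mean(x == y)

def pair_exp(x, y):
    """p_0^(i)p_0^(j) + p_1^(i)p_1^(j): chance 2-agreement."""
    return np.mean(x == 0) * np.mean(y == 0) + np.mean(x == 1) * np.mean(y == 1)

lhs_obs = sum(pair_obs(ratings[i], ratings[j]) for i, j in combinations(range(3), 2))
p000 = np.mean((ratings == 0).all(axis=0))
p111 = np.mean((ratings == 1).all(axis=0))
assert abs(lhs_obs - (1 + 2 * (p000 + p111))) < 1e-12      # identity (2.9)

lhs_exp = sum(pair_exp(ratings[i], ratings[j]) for i, j in combinations(range(3), 2))
e000 = np.prod([np.mean(r == 0) for r in ratings])
e111 = np.prod([np.mean(r == 1) for r in ratings])
assert abs(lhs_exp - (1 + 2 * (e000 + e111))) < 1e-12      # identity (2.10)
print("Proposition 2.3 verified on the simulated data.")
```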

We can think of the sixteen proportions $p_{abcd}^{(ijkl)}$ as the elements of a $2 \times 2 \times 2 \times 2$ table that summarizes the 4-agreement between raters i, j, k, and l. We have
$\sum_{a,b,c,d \in \{0,1\}} p_{abcd}^{(ijkl)} = 1.$   (2.13)

Example 2.4. For raters 1, 2, 3, and 4 in Table 1 only the five observed rating patterns correspond to nonzero 4-agreement proportions; the remaining 4-agreement quantities are zero. Furthermore, the sixteen proportions sum to 1, illustrating identity (2.13).

The 3-agreement and 4-agreement quantities are related in the following way. For $a, b, c \in \{0, 1\}$ we have the identities
$p_{abc}^{(ijk)} = p_{abc0}^{(ijkl)} + p_{abc1}^{(ijkl)},$   (2.16a)
$p_{abc}^{(ijl)} = p_{ab0c}^{(ijkl)} + p_{ab1c}^{(ijkl)},$   (2.16b)
$p_{abc}^{(ikl)} = p_{a0bc}^{(ijkl)} + p_{a1bc}^{(ijkl)},$   (2.16c)
$p_{abc}^{(jkl)} = p_{0abc}^{(ijkl)} + p_{1abc}^{(ijkl)}.$   (2.16d)
For example, we have $p_{111}^{(ijk)} = p_{1110}^{(ijkl)} + p_{1111}^{(ijkl)}$. There is also an analogous set of identities for products of the marginal totals.

The identities in (2.16a), (2.16b), (2.16c), and (2.16d) do not lead to a result analogous to Proposition 2.3. We do, however, have the following less general result.

Proposition 2.5. Consider four raters i, j, k, and l. Suppose
$p_{0011}^{(ijkl)} = p_{0101}^{(ijkl)} = p_{0110}^{(ijkl)} = p_{1001}^{(ijkl)} = p_{1010}^{(ijkl)} = p_{1100}^{(ijkl)} = 0.$   (2.17)
One has
$(p_{000}^{(ijk)} + p_{111}^{(ijk)}) + (p_{000}^{(ijl)} + p_{111}^{(ijl)}) + (p_{000}^{(ikl)} + p_{111}^{(ikl)}) + (p_{000}^{(jkl)} + p_{111}^{(jkl)}) = 1 + 3(p_{0000}^{(ijkl)} + p_{1111}^{(ijkl)}).$   (2.18)

Proof. We can express the sum of the 3-agreement quantities
$(p_{000}^{(ijk)} + p_{111}^{(ijk)}) + (p_{000}^{(ijl)} + p_{111}^{(ijl)}) + (p_{000}^{(ikl)} + p_{111}^{(ikl)}) + (p_{000}^{(jkl)} + p_{111}^{(jkl)})$
in terms of 4-agreement quantities using the identities in (2.16a), (2.16b), (2.16c), and (2.16d). Doing this we obtain
$4(p_{0000}^{(ijkl)} + p_{1111}^{(ijkl)}) + p_{0001}^{(ijkl)} + p_{0010}^{(ijkl)} + p_{0100}^{(ijkl)} + p_{1000}^{(ijkl)} + p_{0111}^{(ijkl)} + p_{1011}^{(ijkl)} + p_{1101}^{(ijkl)} + p_{1110}^{(ijkl)}.$   (2.20)
Combining (2.13) and (2.17) we obtain the identity
$p_{0001}^{(ijkl)} + p_{0010}^{(ijkl)} + p_{0100}^{(ijkl)} + p_{1000}^{(ijkl)} + p_{0111}^{(ijkl)} + p_{1011}^{(ijkl)} + p_{1101}^{(ijkl)} + p_{1110}^{(ijkl)} = 1 - p_{0000}^{(ijkl)} - p_{1111}^{(ijkl)}.$   (2.21)
Applying (2.21) to (2.20) we obtain identity (2.18).
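Proposition 2.5 can be illustrated numerically. The sketch below constructs hypothetical data in which no object receives a 2-2 split over the four raters, so that condition (2.17) as stated above is satisfied, and verifies identity (2.18).

```python
# Sketch checking Proposition 2.5 on hypothetical data that satisfy
# condition (2.17): no object receives a 2-2 split over the four raters
# (every rating pattern has at most one dissenting rater).
import numpy as np
from itertools import combinations

patterns = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 1, 1),
            (0, 0, 0, 1), (1, 0, 1, 1), (1, 1, 1, 0)]
ratings = np.array(patterns).T                      # shape (4 raters, 6 objects)

def triple_agreement(x, y, z):
    """p_000 + p_111 for a triple of raters."""
    both0 = (x == 0) & (y == 0) & (z == 0)
    both1 = (x == 1) & (y == 1) & (z == 1)
    return np.mean(both0) + np.mean(both1)

lhs = sum(triple_agreement(*ratings[list(t)]) for t in combinations(range(4), 3))
p0000 = np.mean((ratings == 0).all(axis=0))
p1111 = np.mean((ratings == 1).all(axis=0))
assert abs(lhs - (1 + 3 * (p0000 + p1111))) < 1e-12        # identity (2.18)
print("Proposition 2.5 verified for data without 2-2 split patterns.")
```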

The 4-agreement quantities $p_{0011}^{(ijkl)}$, $p_{0101}^{(ijkl)}$, $p_{0110}^{(ijkl)}$, $p_{1001}^{(ijkl)}$, $p_{1010}^{(ijkl)}$, and $p_{1100}^{(ijkl)}$ are in general not zero. Moreover, even if condition (2.17) holds, we do not obtain an identity similar to (2.18) for the products of the marginal totals.

3. Kappas Based on 2-, 3-, and 4-Agreement

In this section we present the main result. We first introduce Cohen’s $\kappa$ [5] and three multirater kappas, one based on 2-agreement, one based on 3-agreement, and one based on 4-agreement. For two raters i and j Cohen’s $\kappa$ is defined as
$\kappa = \dfrac{p_{00}^{(ij)} + p_{11}^{(ij)} - p_0^{(i)} p_0^{(j)} - p_1^{(i)} p_1^{(j)}}{1 - p_0^{(i)} p_0^{(j)} - p_1^{(i)} p_1^{(j)}}.$   (3.1)
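For readers who wish to compute the statistic, the following sketch implements the familiar (observed − chance)/(1 − chance) form of Cohen’s $\kappa$ for two binary rating vectors; the data and the function name are hypothetical.

```python
# Sketch (hypothetical data): Cohen's kappa for two raters on a binary scale,
# written as (observed agreement - chance agreement) / (1 - chance agreement)
# in terms of the proportions defined in Section 2.
import numpy as np

def cohens_kappa(x, y):
    """Cohen's kappa for two binary rating vectors x and y."""
    p_obs = np.mean(x == y)                              # p_00 + p_11
    p_exp = (np.mean(x == 0) * np.mean(y == 0) +
             np.mean(x == 1) * np.mean(y == 1))          # p_0^(i)p_0^(j) + p_1^(i)p_1^(j)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = np.array([1, 0, 1, 1, 0, 1, 0, 0])
rater2 = np.array([1, 0, 1, 0, 0, 1, 0, 1])
print(cohens_kappa(rater1, rater2))                      # 0.5 for these hypothetical data
```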

Example 3.1. For raters 1 and 2 in Table 1 the value of Cohen’s $\kappa$ follows from inserting the 2-agreement quantities and marginal totals from Example 2.1 into (3.1).

There are several ways to generalize Cohen’s $\kappa$ to the case of multiple raters. A kappa for m raters based on 2-agreement between the raters is given by
$\kappa_{m,2} = \dfrac{\sum_{i<j} \left(p_{00}^{(ij)} + p_{11}^{(ij)} - p_0^{(i)} p_0^{(j)} - p_1^{(i)} p_1^{(j)}\right)}{\sum_{i<j} \left(1 - p_0^{(i)} p_0^{(j)} - p_1^{(i)} p_1^{(j)}\right)}.$
The m in $\kappa_{m,2}$ denotes that this coefficient is a measure for m raters. The 2 in $\kappa_{m,2}$ denotes that the coefficient is a measure of 2-agreement, since the $p_{ab}^{(ij)}$ and the products $p_a^{(i)} p_b^{(j)}$ describe information between pairs of raters.
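A straightforward way to compute a 2-agreement kappa of this pairwise-summed form is sketched below; the data and the function name kappa_m2 are hypothetical, and the code follows the ratio-of-summed-differences structure described above.

```python
# Sketch of a 2-agreement multirater kappa of the Hubert/Conger type for
# binary ratings: the sums of (observed - chance) and (1 - chance) pairwise
# agreement are taken over all pairs of raters before the ratio is formed.
import numpy as np
from itertools import combinations

def kappa_m2(ratings):
    """2-agreement kappa for an m x n array of binary ratings."""
    m = ratings.shape[0]
    num, den = 0.0, 0.0
    for i, j in combinations(range(m), 2):
        obs = np.mean(ratings[i] == ratings[j])
        exp = (np.mean(ratings[i] == 0) * np.mean(ratings[j] == 0) +
               np.mean(ratings[i] == 1) * np.mean(ratings[j] == 1))
        num += obs - exp
        den += 1 - exp
    return num / den

ratings = np.array([[1, 0, 1, 1, 0, 1, 0, 0],
                    [1, 0, 1, 0, 0, 1, 0, 1],
                    [1, 1, 1, 1, 0, 1, 0, 0],
                    [1, 0, 1, 1, 0, 0, 0, 0]])
print(kappa_m2(ratings))
```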

Coefficient $\kappa_{m,2}$ is a special case of a multicategorical kappa that was first considered in Hubert [13] and was independently proposed by Conger [6]. Hubert’s kappa is also discussed in Davies and Fleiss [7], Popping [21], and Heuvelmans and Sanders [11]. Furthermore, Hubert’s kappa is a special case of the descriptive statistics discussed in Berry and Mielke [3] and Janson and Olssen [14]. Standard errors for $\kappa_{m,2}$ can be found in Hubert [13].

Example 3.2. The value of $\kappa_{4,2}$ for the four raters in Table 1 is reported on the right-hand side of the table.

A kappa for m raters based on 3-agreement between the raters is given by
$\kappa_{m,3} = \dfrac{\sum_{i<j<k} \left(p_{000}^{(ijk)} + p_{111}^{(ijk)} - p_0^{(i)} p_0^{(j)} p_0^{(k)} - p_1^{(i)} p_1^{(j)} p_1^{(k)}\right)}{\sum_{i<j<k} \left(1 - p_0^{(i)} p_0^{(j)} p_0^{(k)} - p_1^{(i)} p_1^{(j)} p_1^{(k)}\right)}.$
For m = 3 raters we have the special case
$\kappa_{3,3} = \dfrac{p_{000}^{(123)} + p_{111}^{(123)} - p_0^{(1)} p_0^{(2)} p_0^{(3)} - p_1^{(1)} p_1^{(2)} p_1^{(3)}}{1 - p_0^{(1)} p_0^{(2)} p_0^{(3)} - p_1^{(1)} p_1^{(2)} p_1^{(3)}}.$
Coefficient $\kappa_{3,3}$ was first considered in Von Eye and Mun [8]. It is also a special case of the weighted kappa proposed in Mielke et al. [17, 18]. The coefficient $\kappa_{3,3}$ is a measure of simultaneous agreement for three raters [18]. Standard errors for $\kappa_{3,3}$ can be found in [17, 18].
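The corresponding 3-agreement computation over all triples of raters can be sketched in the same way; the data and the function name kappa_m3 are hypothetical.

```python
# Sketch of the 3-agreement kappa for binary ratings: observed and chance
# agreement are computed for every triple of raters, summed, and the ratio
# is formed.
import numpy as np
from itertools import combinations

def kappa_m3(ratings):
    """3-agreement kappa for an m x n array of binary ratings (m >= 3)."""
    m = ratings.shape[0]
    num, den = 0.0, 0.0
    for i, j, k in combinations(range(m), 3):
        sub = ratings[[i, j, k]]
        obs = (np.mean((sub == 0).all(axis=0)) +
               np.mean((sub == 1).all(axis=0)))
        exp = (np.prod([np.mean(ratings[r] == 0) for r in (i, j, k)]) +
               np.prod([np.mean(ratings[r] == 1) for r in (i, j, k)]))
        num += obs - exp
        den += 1 - exp
    return num / den

ratings = np.array([[1, 0, 1, 1, 0, 1, 0, 0],
                    [1, 0, 1, 0, 0, 1, 0, 1],
                    [1, 1, 1, 1, 0, 1, 0, 0],
                    [1, 0, 1, 1, 0, 0, 0, 0]])
print(kappa_m3(ratings))   # for binary data this equals the 2-agreement value (Theorem 3.4)
```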

Example 3.3. The value of $\kappa_{4,3}$ for the four raters in Table 1 is reported on the right-hand side of the table. Interestingly, we have $\kappa_{4,3} = \kappa_{4,2}$ (Example 3.2).

Examples 3.2 and 3.3 show that the multirater kappas based on 2-agreement and 3-agreement produce identical values for the data in Table 1. This equivalence is formalized in the following result.

Theorem 3.4. $\kappa_{m,2} = \kappa_{m,3}$ for all m ≥ 3.

Proof. Given m raters, a pair of raters i and j occurs m − 2 times together in a triple of raters. Hence, using identities (2.9) and (2.10), we have
$\sum_{i<j<k} \left(p_{000}^{(ijk)} + p_{111}^{(ijk)}\right) = \frac{1}{2}\left[(m-2) \sum_{i<j} \left(p_{00}^{(ij)} + p_{11}^{(ij)}\right) - \binom{m}{3}\right],$
$\sum_{i<j<k} \left(p_0^{(i)} p_0^{(j)} p_0^{(k)} + p_1^{(i)} p_1^{(j)} p_1^{(k)}\right) = \frac{1}{2}\left[(m-2) \sum_{i<j} \left(p_0^{(i)} p_0^{(j)} + p_1^{(i)} p_1^{(j)}\right) - \binom{m}{3}\right].$   (3.8)
Multiplying all terms in $\kappa_{m,3}$ by $2/(m-2)$, and using the identities (3.8) in the result, we obtain
$\kappa_{m,3} = \dfrac{\sum_{i<j} \left(p_{00}^{(ij)} + p_{11}^{(ij)}\right) - \sum_{i<j} \left(p_0^{(i)} p_0^{(j)} + p_1^{(i)} p_1^{(j)}\right)}{\dfrac{3}{m-2}\binom{m}{3} - \sum_{i<j} \left(p_0^{(i)} p_0^{(j)} + p_1^{(i)} p_1^{(j)}\right)}.$   (3.9)
Since $\dfrac{3}{m-2}\binom{m}{3} = \binom{m}{2} = \sum_{i<j} 1$ in the denominator of (3.9), coefficient (3.9) is equivalent to $\kappa_{m,2}$.
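Theorem 3.4 can be checked numerically. The sketch below codes a generic g-agreement kappa of the form described above (the function name kappa_mg is hypothetical) and confirms that the values for g = 2 and g = 3 coincide on simulated binary ratings.

```python
# Sketch verifying Theorem 3.4 numerically: for randomly generated binary
# ratings, the g-agreement kappa coded below gives the same value for
# g = 2 and g = 3.
import numpy as np
from itertools import combinations

def kappa_mg(ratings, g):
    """g-agreement kappa for an m x n array of binary ratings."""
    m = ratings.shape[0]
    num, den = 0.0, 0.0
    for tup in combinations(range(m), g):
        sub = ratings[list(tup)]
        obs = np.mean((sub == 0).all(axis=0)) + np.mean((sub == 1).all(axis=0))
        exp = (np.prod([np.mean(ratings[r] == 0) for r in tup]) +
               np.prod([np.mean(ratings[r] == 1) for r in tup]))
        num += obs - exp
        den += 1 - exp
    return num / den

rng = np.random.default_rng(1)
for _ in range(5):
    ratings = rng.integers(0, 2, size=(5, 40))       # five raters, 40 objects
    k2, k3 = kappa_mg(ratings, 2), kappa_mg(ratings, 3)
    assert abs(k2 - k3) < 1e-10
print("The kappas based on 2-agreement and 3-agreement coincide on all samples.")
```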

Finally, a kappa for m raters based on 4-agreement between the raters is given by
$\kappa_{m,4} = \dfrac{\sum_{i<j<k<l} \left(p_{0000}^{(ijkl)} + p_{1111}^{(ijkl)} - p_0^{(i)} p_0^{(j)} p_0^{(k)} p_0^{(l)} - p_1^{(i)} p_1^{(j)} p_1^{(k)} p_1^{(l)}\right)}{\sum_{i<j<k<l} \left(1 - p_0^{(i)} p_0^{(j)} p_0^{(k)} p_0^{(l)} - p_1^{(i)} p_1^{(j)} p_1^{(k)} p_1^{(l)}\right)}.$
The special case $\kappa_{4,4}$ extends the kappa proposed in Von Eye and Mun [8] and Mielke et al. [17, 18].
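For completeness, a 4-agreement version can be sketched analogously (hypothetical data and function name); as Example 3.5 below illustrates, it is in general not equal to the 2-agreement and 3-agreement kappas.

```python
# Sketch of a 4-agreement kappa for binary ratings, built from the quadruple
# quantities p_0000 and p_1111 and the corresponding products of marginal
# totals. Unlike the 3-agreement version, it is in general not equal to the
# 2-agreement kappa.
import numpy as np
from itertools import combinations

def kappa_m4(ratings):
    """4-agreement kappa for an m x n array of binary ratings (m >= 4)."""
    m = ratings.shape[0]
    num, den = 0.0, 0.0
    for tup in combinations(range(m), 4):
        sub = ratings[list(tup)]
        obs = np.mean((sub == 0).all(axis=0)) + np.mean((sub == 1).all(axis=0))
        exp = (np.prod([np.mean(ratings[r] == 0) for r in tup]) +
               np.prod([np.mean(ratings[r] == 1) for r in tup]))
        num += obs - exp
        den += 1 - exp
    return num / den

ratings = np.array([[1, 0, 1, 1, 0, 1, 0, 0],
                    [1, 0, 1, 0, 0, 1, 0, 1],
                    [1, 1, 1, 1, 0, 1, 0, 0],
                    [1, 0, 1, 1, 0, 0, 0, 0]])
print(kappa_m4(ratings))   # generally differs (often only slightly) from the 2-agreement value
```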

Example 3.5. The value of $\kappa_{4,4}$ for the four raters in Table 1 is reported on the right-hand side of the table. Note that for these data we have $\kappa_{4,4} \neq \kappa_{4,2} = \kappa_{4,3}$ (Examples 3.2 and 3.3), although the difference between the values of the multirater kappas is negligible.

4. Discussion

Cohen’s kappa is a standard tool for summarizing agreement between the ratings of two observers on a nominal scale. Cohen’s kappa can only be used for comparing two raters at a time. Various authors have proposed extensions of Cohen's kappa for multiple raters. The concept of g-agreement with 2 ≤ g ≤ m refers to the situation in which it is decided that there is agreement if g out of m raters assign an object to the same category [6, 22]. Given m raters we can formulate m − 1 multirater kappas, one based on 2-agreement, one based on 3-agreement, and so on, and one based on m-agreement. Although all these kappas can be defined from a mathematical perspective, the multirater kappas in general produce different values (see, e.g., [32, 33]). In this paper we considered multirater kappas based on 2-, 3-, and 4-agreement for dichotomous ratings.

As the main result of the paper it was shown (Theorem 3.4, Section 3) that the popular concept of 2-agreement and the slightly more demanding but reasonable alternative concept of 3-agreement coincide for dichotomous (binary) scores; that is, the multirater kappas based on 2-agreement and 3-agreement are identical. Hence, for ratings on a dichotomous scale the choice between these two forms of agreement does not matter. The key properties behind this equivalence are the relations between the 2-agreement and 3-agreement quantities in Proposition 2.3 (Section 2). The O’Malley et al. data in Table 1 and the hypothetical data in Table 2 show that 2/3-agreement is not equivalent to 4-agreement. This is because there is no result analogous to Proposition 2.3 that links the 2/3-agreement and 4-agreement quantities. The data examples in, for example, Warrens [32, 33] show that the equivalence also does not hold for multirater kappas for more than two categories. Furthermore, the data examples in Table 2 show that the 2/3-agreement and 4-agreement kappas can produce quite different values.

Another statistic that is often regarded as a generalization of Cohen’s $\kappa$ is the multirater statistic proposed in Fleiss [9]. Artstein and Poesio [1] however showed that this statistic is actually a multirater extension of Scott’s pi [23] (see also [22]). Using the marginal proportions averaged over the raters instead of the rater-specific marginal proportions in $\kappa_{m,2}$, we obtain a special case of the coefficient in Fleiss [9]; this shows how the Fleiss coefficient is related to Hubert’s kappa [6, 13, 29]. It is possible to formulate an analogous multirater pi coefficient based on 3-agreement. This pi coefficient is equivalent to the pi coefficient based on 2-agreement.

Acknowledgment

This paper is part of project 451-11-026, funded by the Netherlands Organisation for Scientific Research.