Advances in Statistics

Volume 2014, Article ID 759527, 6 pages

http://dx.doi.org/10.1155/2014/759527

## On Marginal Dependencies of the 2 × 2 Kappa

Matthijs J. Warrens

Methodology and Statistics Unit, Institute of Psychology, Leiden University, P.O. Box 9555, 2300 RB Leiden, The Netherlands

Received 19 July 2014; Accepted 1 November 2014; Published 19 November 2014

Academic Editor: Lynn Kuo

Copyright © 2014 Matthijs J. Warrens. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Cohen’s kappa is a standard tool for the analysis of agreement in a 2 × 2 reliability study. Researchers are frequently only interested in the kappa-value of a sample. Various authors have observed that if two pairs of raters have the same amount of observed agreement, the pair whose marginal distributions are more similar to each other may have a lower kappa-value than the pair with more divergent marginal distributions. Here we present exact formulations of some of these properties. The results provide a better understanding of the 2 × 2 kappa for situations where it is used as a sample statistic.

#### 1. Introduction

Results from experimental and research studies can often be summarized in a 2 × 2 table [1]. An example is a reliability study in which two observers rate the same sample of subjects on the presence/absence of a trait or an ability [2, 3]. In this example the four cells of the table are the proportion of times the observers agreed on the presence of the trait, the proportion of times the trait was present according to the first observer but absent according to the second observer, the proportion of times the trait was absent according to the first observer but present according to the second observer, and the proportion of times the observers agreed on the absence of the trait.

To assess the quality of the ratings, the agreement between the ratings is taken as an indicator of the quality of the category definitions and the observers’ ability to apply them. A standard tool for estimating agreement in a reliability study is Cohen’s kappa [4–8]. Its value is 1 when there is perfect agreement, 0 when agreement is equal to that expected under independence, and negative when agreement is less than expected by chance.

Several authors have presented population models for Cohen’s kappa [2, 7]. Under these models kappa can be interpreted as an association coefficient. However, kappa is also frequently used as a sample statistic [4, 8–11], for example, when calculating kappa for a sample of subjects is one step in a series of research steps. In this case, researchers are merely interested in the agreement in the sample, not in that of a population.

As a sample statistic, kappa is known to be marginal or prevalence dependent, since its value depends on the marginal totals, that is, on how often the raters use the rating categories [12–14]. The value of kappa depends on the prevalence of the condition being diagnosed and can be quite low if the condition is very common or very rare. Various authors have shown that if two pairs of observers have the same amount of observed agreement, the pair whose marginal distributions are more similar to each other may have a lower kappa-value than the pair with more divergent marginal distributions [4, 9, 15, 16]. Since observers with similar marginal distributions usually have a higher amount of agreement expected to occur by chance, a fixed amount of observed agreement will then lead to a lower kappa-value due to the definition of the statistic [14].

The marginal dependence of Cohen’s kappa has almost exclusively been demonstrated and described by means of examples of tables [4, 8–11, 16]. However, for a better understanding of the behavior of kappa as a sample statistic, it is desirable to have exact formulations of its marginal dependence. Such formulations are presented in this paper. The paper is organized as follows. Section 2 introduces notation and defines Cohen’s kappa. In Section 3 several concepts for a 2 × 2 table are presented. The main results are presented in Section 4. They show that kappa may exhibit several different forms of marginal dependence. The results do not necessarily suggest that Cohen’s kappa should be discarded as an agreement measure. Instead, the exact formulations provide a better understanding of the statistic. Section 5 contains several conclusions.

#### 2. Notation and Kappa

In this section we introduce notation and define the kappa coefficient. Suppose two fixed observers independently rate the same set of subjects using the same two categories 1 and 0. For example, 1 = presence and 0 = absence of a trait. For a sample of subjects, let $a$, $b$, $c$, and $d$ denote, respectively, the proportion classified in category 1 by both observers, the proportion classified by the first observer in category 1 and by the second observer in category 0, the proportion classified by the first observer in category 0 and by the second observer in category 1, and the proportion classified in category 0 by both observers.

A general 2 × 2 table with observed proportions, denoted by $A$, is presented in Table 1. The row and column totals are the marginal totals that result from summing the relative frequencies. We denote these by $p_1 = a + b$ and $p_2 = c + d$ for the first observer and by $q_1 = a + c$ and $q_2 = b + d$ for the second observer. They reflect how often the observers used the categories.

Using the notation presented in Table 1, the proportion of observed agreement is given by $P_o = a + d$, while the proportion of expected agreement is given by $P_e = p_1q_1 + p_2q_2$. Furthermore, Cohen’s kappa can be defined as

$$\kappa = \frac{P_o - P_e}{1 - P_e}. \quad (1)$$

Cohen’s kappa is a chance-corrected coefficient [17]. The value of kappa is 1 when perfect agreement between the two observers occurs, 0 when agreement is equal to that expected under independence ($P_o = P_e$), and negative when agreement is less than expected by chance.
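As a quick illustration of these formulas (not part of the original article), the following Python sketch computes $P_o$, $P_e$, and coefficient (1) from the four cell proportions:

```python
def kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 table of proportions (a + b + c + d = 1).

    a and d are the agreement cells; b and c are the disagreement cells.
    """
    p1, p2 = a + b, c + d          # row marginals (first observer)
    q1, q2 = a + c, b + d          # column marginals (second observer)
    po = a + d                     # proportion of observed agreement
    pe = p1 * q1 + p2 * q2         # agreement expected under independence
    return (po - pe) / (1 - pe)
```

Perfect agreement yields a value of 1, and a table whose cells equal the products of its marginals (statistical independence) yields 0, in line with the description above.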

#### 3. Concepts for a 2 × 2 Table

In this section we introduce several concepts for a 2 × 2 table $A$. The 2-tuples $(p_1, p_2)$ and $(q_1, q_2)$ contain the marginal distributions.

*Definition 1.* Two tuples $(p_1, p_2)$ and $(q_1, q_2)$ are said to be

(i) *similarly arranged* if both increase (i.e., $p_1 \le p_2$ and $q_1 \le q_2$) or both decrease (i.e., $p_1 \ge p_2$ and $q_1 \ge q_2$);

(ii) *oppositely arranged* if one increases and the other decreases.

Furthermore, a tuple $(p_1, p_2)$ is said to be *balanced* if $p_1 = p_2 = 1/2$.

In the following definition we use the concepts from Definition 1 to define terminology that will be used to formalize the marginal dependencies of kappa.

*Definition 2.* A 2 × 2 table $A$ is said to be

(i) *strongly marginal symmetric* if $p_1 = q_1$ (and hence $p_2 = q_2$);

(ii) *weakly marginal symmetric* if $(p_1, p_2)$ and $(q_1, q_2)$ are similarly arranged;

(iii) *balanced* if $(p_1, p_2)$ or $(q_1, q_2)$ is balanced;

(iv) *marginal asymmetric* if $(p_1, p_2)$ and $(q_1, q_2)$ are oppositely arranged.

Note that strong marginal symmetry implies weak marginal symmetry. Furthermore, strong marginal symmetry coincides with the usual definition of a symmetric matrix.
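Definition 2 is straightforward to make operational. The sketch below (our own helper names, not from the paper) classifies a table from its cell proportions; note that it compares floats exactly, so it is meant for exact (binary-representable) illustration values only:

```python
def marginals(a, b, c, d):
    """Row totals (p1, p2) and column totals (q1, q2) of a 2x2 table."""
    return (a + b, c + d), (a + c, b + d)

def is_strongly_marginal_symmetric(a, b, c, d):
    (p1, _), (q1, _) = marginals(a, b, c, d)
    return p1 == q1  # p1 = q1 implies p2 = q2, since each tuple sums to 1

def is_weakly_marginal_symmetric(a, b, c, d):
    (p1, p2), (q1, q2) = marginals(a, b, c, d)
    # similarly arranged: both tuples increase or both decrease
    return (p1 - p2) * (q1 - q2) >= 0

def is_balanced(a, b, c, d):
    (p1, p2), (q1, q2) = marginals(a, b, c, d)
    return p1 == p2 or q1 == q2

def is_marginal_asymmetric(a, b, c, d):
    (p1, p2), (q1, q2) = marginals(a, b, c, d)
    # oppositely arranged: one tuple increases, the other decreases
    return (p1 - p2) * (q1 - q2) <= 0
```

A balanced table satisfies both the weak-symmetry and the asymmetry test here, which matches the non-strict inequalities in Lemma 3 below.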

The following result relates some of the concepts in Definition 2 to the value of the chance-expected agreement $P_e$. Lemma 3 is used in the proof of Theorem 5.

Lemma 3. *For a 2 × 2 table $A$ with marginal distributions $(p_1, p_2)$ and $(q_1, q_2)$ the following equivalences hold.*

(1) *$A$ is weakly marginal symmetric $\Leftrightarrow P_e \ge 1/2$;*

(2) *$A$ is balanced $\Leftrightarrow P_e = 1/2$;*

(3) *$A$ is marginal asymmetric $\Leftrightarrow P_e \le 1/2$.*

*Proof.* We prove equivalence (1). The other equivalences follow from similar arguments.

($\Rightarrow$) If $A$ is weakly marginal symmetric, $(p_1, p_2)$ and $(q_1, q_2)$ are similarly arranged and we have $(p_1 - p_2)(q_1 - q_2) \ge 0$, or equivalently

$$p_1q_1 + p_2q_2 \ge p_1q_2 + p_2q_1. \quad (2)$$

Adding $p_1q_1 + p_2q_2$ to both sides of inequality (2) and dividing the result by 2, we obtain

$$P_e \ge \frac{(p_1 + p_2)(q_1 + q_2)}{2} = \frac{1}{2}. \quad (3)$$

($\Leftarrow$) If $P_e \ge 1/2$ then $(p_1 - p_2)(q_1 - q_2) \ge 0$ must hold, using the same steps as in ($\Rightarrow$) backwards. Hence, $(p_1, p_2)$ and $(q_1, q_2)$ are similarly arranged, and it follows that $A$ is weakly marginal symmetric.

#### 4. Main Results

In this section we present several marginal dependencies of Cohen’s kappa (Theorems 5, 7, and 10). The following lemma will be used repeatedly.

Lemma 4. *Coefficient (1) is strictly decreasing in $P_e$.*

*Proof.* The first-order partial derivative

$$\frac{\partial \kappa}{\partial P_e} = \frac{P_o - 1}{(1 - P_e)^2} \quad (4)$$

is negative for $P_o < 1$.

The following result is a slightly stronger version of a theorem in Warrens [16] for a rating scale with two categories. Theorem 5 shows that, for a fixed value of the proportion of observed agreement $P_o$, tables that possess weak marginal symmetry produce lower values of kappa than tables that are marginal asymmetric.

Theorem 5. *Let $A$, $B$, and $C$ be 2 × 2 tables with the same value of $P_o$ that are, respectively, weakly marginal symmetric, balanced, and marginal asymmetric. Furthermore, let $\kappa_A$, $\kappa_B$, and $\kappa_C$ denote the associated values of kappa. Then $\kappa_A \le \kappa_B \le \kappa_C$.*

*Proof.* Lemma 4 shows that $\kappa$ is strictly decreasing in $P_e$. By Lemma 3 we have $P_e \ge 1/2$ for table $A$, $P_e = 1/2$ for table $B$, and $P_e \le 1/2$ for table $C$. Since the three tables share the same value of $P_o$, the result follows.

Example 6 illustrates Theorem 5.

*Example 6.* Consider the three hypothetical tables in Table 2.

Each table has the same proportion of observed agreement $P_o$. Table 2(a) is strongly marginal symmetric (and hence weakly marginal symmetric), Table 2(b) is balanced, and Table 2(c) is marginal asymmetric. For the associated kappa-values we have the double inequality $\kappa_A \le \kappa_B \le \kappa_C$, which illustrates Theorem 5.
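Since the entries of Table 2 are not reproduced in this version of the article, the following sketch checks the ordering of Theorem 5 on three stand-in tables of our own construction (not the paper's values), each with observed agreement $P_o = 0.7$:

```python
def kappa(a, b, c, d):
    # Cohen's kappa for a 2x2 table of proportions.
    pe = (a + b) * (a + c) + (c + d) * (b + d)
    return ((a + d) - pe) / (1 - pe)

# Three hypothetical tables, all with observed agreement Po = a + d = 0.7.
k_weak = kappa(0.60, 0.05, 0.25, 0.10)  # weakly marginal symmetric: (0.65, 0.35), (0.85, 0.15)
k_bal  = kappa(0.35, 0.15, 0.15, 0.35)  # balanced: (0.5, 0.5), (0.5, 0.5)
k_asym = kappa(0.40, 0.30, 0.00, 0.30)  # marginal asymmetric: (0.7, 0.3), (0.4, 0.6)

assert k_weak < k_bal < k_asym          # the ordering of Theorem 5
```

The more similar the two marginal distributions are, the larger $P_e$ becomes, and the lower the resulting kappa-value for the same $P_o$.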

Theorem 5 also considers tables that have asymmetric marginals. However, asymmetric tables may not be relevant in practice. If the classifications are hard to make, researchers will often make use of expert observers. Furthermore, novice observers usually receive some training before the actual classifications have to be made. Asymmetric tables are therefore rarely encountered in practice.

Theorem 7 shows that a table whose two marginal distributions differ but are similarly arranged may have a lower kappa-value than a table with identical marginal distributions.

Theorem 7. *Let $A$ be a weakly marginal symmetric table with marginals $(p_1, p_2)$ and $(q_1, q_2)$ and kappa-value $\kappa_A$ and $B$ a strongly marginal symmetric table with marginals $(r_1, r_2)$ and kappa-value $\kappa_B$. Furthermore, suppose that $A$ and $B$ have the same proportion of observed agreement $P_o$. If*

$$\min(p_1, p_2) < \min(r_1, r_2) \quad \text{and} \quad \min(q_1, q_2) < \min(r_1, r_2), \quad (5)$$

*then $\kappa_A < \kappa_B$.*

*Proof.* Due to the symmetries of the proportion of expected agreement $P_e$, we may assume, without loss of generality, that $p_1 \le p_2$, $q_1 \le q_2$, and $r_1 \le r_2$, and, interchanging the roles of the observers if necessary, that $p_1 \le q_1$. It then follows from inequality (5) that $q_1 < r_1$. Furthermore, since $q_1 < r_1 \le 1/2$, we have $q_1 + r_1 < 1$, or $1 - q_1 - r_1 > 0$, and since $q_1 \le 1/2$, we also have $1 - 2q_1 \ge 0$.

Since $1 - 2q_1 \ge 0$, multiplying both sides of the inequality $p_1 \le q_1$ by $1 - 2q_1$ yields

$$p_1(1 - 2q_1) \le q_1(1 - 2q_1). \quad (6)$$

Adding $q_1$ to both sides of inequality (6) we obtain

$$p_1 + q_1 - 2p_1q_1 \le 2q_1 - 2q_1^2, \quad (7)$$

or equivalently,

$$p_1q_2 + p_2q_1 \le 2q_1q_2. \quad (8)$$

Next, since $1 - q_1 - r_1 > 0$, multiplying both sides of the inequality $q_1 < r_1$ by $2(1 - q_1 - r_1)$ we obtain

$$2q_1(1 - q_1 - r_1) < 2r_1(1 - q_1 - r_1). \quad (9)$$

Adding $2q_1r_1$ to both sides of inequality (9) we obtain

$$2q_1(1 - q_1) < 2r_1(1 - r_1), \quad (10)$$

or equivalently,

$$2q_1q_2 < 2r_1r_2. \quad (11)$$

Combining (8) and (11) we obtain the inequality

$$p_1q_2 + p_2q_1 < 2r_1r_2. \quad (12)$$

Since $p_1q_2 + p_2q_1 = 1 - P_e$ for table $A$ and $2r_1r_2 = 1 - P_e$ for table $B$, inequality (12) states that the value of $P_e$ is larger for table $A$. The inequality $\kappa_A < \kappa_B$ then follows from inequality (12) and application of Lemma 4.

Example 8 illustrates Theorem 7.

*Example 8.* Consider the two hypothetical tables in Table 3.

Both tables have the same proportion of observed agreement $P_o$. Table 3(a) is weakly marginal symmetric, whereas Table 3(b) is strongly marginal symmetric. The marginal totals of the two tables satisfy the inequalities in (5). Furthermore, for the kappa-values of Tables 3(a) and 3(b) we have $\kappa_A < \kappa_B$, which illustrates Theorem 7.
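Because the entries of Table 3 are not reproduced here, Theorem 7 can instead be checked with stand-in values of our own: below, table $A$ is weakly marginal symmetric with marginals (0.2, 0.8) and (0.3, 0.7), table $B$ is strongly marginal symmetric with the less skewed marginals (0.4, 0.6), and both have observed agreement 0.8:

```python
def kappa(a, b, c, d):
    # Cohen's kappa for a 2x2 table of proportions.
    pe = (a + b) * (a + c) + (c + d) * (b + d)
    return ((a + d) - pe) / (1 - pe)

# Table A: marginals (0.2, 0.8) and (0.3, 0.7); Po = 0.15 + 0.65 = 0.8.
kA = kappa(0.15, 0.05, 0.15, 0.65)
# Table B: both marginals equal to (0.4, 0.6);  Po = 0.30 + 0.50 = 0.8.
kB = kappa(0.30, 0.10, 0.10, 0.50)

assert kA < kB  # the table with identical marginals has the higher kappa
```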

Using arguments similar to those in the proof of Theorem 7 we may obtain the following result.

Theorem 9. *Let $A$ be a weakly marginal symmetric table with marginals $(p_1, p_2)$ and $(q_1, q_2)$ and kappa-value $\kappa_A$ and $B$ a strongly marginal symmetric table with marginals $(r_1, r_2)$ and kappa-value $\kappa_B$. Furthermore, suppose that $A$ and $B$ have the same proportion of observed agreement $P_o$. If*

$$\min(r_1, r_2) < \min(p_1, p_2) \quad \text{and} \quad \min(r_1, r_2) < \min(q_1, q_2), \quad (14)$$

*then $\kappa_A > \kappa_B$.*

If we have $p_1 \le p_2$, $q_1 \le q_2$, and $r_1 \le r_2$, then Theorems 7 and 9 cover the cases $r_1 > \max(p_1, q_1)$ and $r_1 < \min(p_1, q_1)$. The cases $p_1 \le r_1 \le q_1$ and $q_1 \le r_1 \le p_1$ turn out to be more complicated.
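Theorem 9 describes the reverse situation, and it can be spot-checked in the same way (again with stand-in values of our own, not from the article): table $A$ is weakly marginal symmetric with marginals (0.2, 0.8) and (0.3, 0.7), table $B$ is strongly marginal symmetric with the more skewed marginals (0.1, 0.9), and both have observed agreement 0.9:

```python
def kappa(a, b, c, d):
    # Cohen's kappa for a 2x2 table of proportions.
    pe = (a + b) * (a + c) + (c + d) * (b + d)
    return ((a + d) - pe) / (1 - pe)

# Table A: marginals (0.2, 0.8) and (0.3, 0.7); Po = 0.20 + 0.70 = 0.9.
kA = kappa(0.20, 0.00, 0.10, 0.70)
# Table B: both marginals equal to (0.1, 0.9);  Po = 0.05 + 0.85 = 0.9.
kB = kappa(0.05, 0.05, 0.05, 0.85)

assert kA > kB  # now the table with unequal marginals has the higher kappa
```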

Another marginal dependence of kappa is presented in Theorem 10. The theorem shows that, for a constant value of the proportion of observed agreement $P_o$, tables that exhibit weak marginal symmetry may produce higher kappa-values than tables with strong marginal symmetry. Theorem 10 is formulated in the same setting as Theorem 7, but conditions (5) and (15) are different, unrelated requirements.

Theorem 10. *Let $A$ be a weakly marginal symmetric table with marginals $(p_1, p_2)$ and $(q_1, q_2)$ and kappa-value $\kappa_A$ and $B$ a strongly marginal symmetric table with marginals $(r_1, r_2)$ and kappa-value $\kappa_B$. Furthermore, suppose that $A$ and $B$ have the same proportion of observed agreement $P_o$. If*

$$\max(r_1, r_2) > \frac{\max(p_1, p_2) + \max(q_1, q_2)}{2}, \quad (15)$$

*then $\kappa_A > \kappa_B$.*

*Proof.* Due to the symmetries of the proportion of expected agreement $P_e$, we may assume, without loss of generality, that $p_1 \le p_2$ and $r_1 \le r_2$. It then follows from inequality (15) that $r_2 > (p_2 + q_2)/2$. Furthermore, since $A$ is weakly marginal symmetric we must have $q_1 \le q_2$, and hence $p_2 \ge 1/2$ and $q_2 \ge 1/2$. It follows that $(p_2 + q_2)/2 \ge 1/2$, and thus $r_2 > 1/2$.

For $0 \le x \le 1$ the function $x(1 - x)$ is concave with a maximum at $x = 1/2$, and it is thus strictly decreasing for $x \ge 1/2$. Since $r_2 > (p_2 + q_2)/2 \ge 1/2$, we have

$$r_2(1 - r_2) < \frac{p_2 + q_2}{2}\left(1 - \frac{p_2 + q_2}{2}\right), \quad (16)$$

or equivalently,

$$2r_1r_2 < (p_2 + q_2) - \frac{(p_2 + q_2)^2}{2}. \quad (17)$$

Next, since a square is nonnegative we have $(p_2 - q_2)^2 \ge 0$, or

$$p_2^2 + q_2^2 \ge 2p_2q_2. \quad (18)$$

Adding $2p_2q_2$ to both sides of inequality (18) we obtain

$$(p_2 + q_2)^2 \ge 4p_2q_2. \quad (19)$$

Combining inequalities (17) and (19) we obtain

$$2r_1r_2 < (p_2 + q_2) - 2p_2q_2 = p_1q_2 + p_2q_1. \quad (20)$$

Subtracting both sides of inequality (20) from 1 we obtain

$$1 - p_1q_2 - p_2q_1 < 1 - 2r_1r_2, \quad (21)$$

or equivalently,

$$p_1q_1 + p_2q_2 < r_1^2 + r_2^2. \quad (22)$$

Inequality (22) states that the value of $P_e$ is smaller for table $A$. The inequality $\kappa_A > \kappa_B$ then follows from inequality (22) and application of Lemma 4.

Example 11 illustrates Theorem 10 and a corollary of Theorem 10.

*Example 11.* Consider the four hypothetical tables in Table 4.

Each table has the same proportion of observed agreement $P_o$. Tables 4(a) and 4(d) are strongly marginal symmetric, whereas Tables 4(b) and 4(c) are weakly marginal symmetric. For the largest marginals of Tables 4(a) and 4(b) inequality (15) holds, and the kappa-value of Table 4(b) exceeds that of Table 4(a), which illustrates Theorem 10.

For many tables from the literature the converse of Theorem 10 also holds. However, Tables 4(a) and 4(c) provide us with a counterexample that illustrates that the converse does not hold in general. The kappa-value of Table 4(c) exceeds that of Table 4(a), but inequality (15) does not hold for this pair of tables.

Finally, Tables 4(a) and 4(d) illustrate a special application of Theorem 10. If two tables are strongly marginal symmetric and have the same proportion of observed agreement $P_o$, then the table with the most skewed (unbalanced) marginals (Table 4(a)) has the lowest value of kappa. This is illustrated by the fact that the kappa-value of Table 4(a) is smaller than that of Table 4(d).
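Since Table 4 itself is not reproduced here, both Theorem 10 and its special application can be checked with stand-in values of our own construction: table $A$ below is weakly marginal symmetric with marginals (0.4, 0.6) and (0.45, 0.55), table $B$ is strongly marginal symmetric with the markedly more skewed marginals (0.2, 0.8), and all tables have observed agreement 0.85:

```python
def kappa(a, b, c, d):
    # Cohen's kappa for a 2x2 table of proportions.
    pe = (a + b) * (a + c) + (c + d) * (b + d)
    return ((a + d) - pe) / (1 - pe)

# Table A: marginals (0.4, 0.6) and (0.45, 0.55); Po = 0.35 + 0.50 = 0.85.
kA = kappa(0.350, 0.050, 0.100, 0.500)
# Table B: both marginals equal to (0.2, 0.8);    Po = 0.125 + 0.725 = 0.85.
kB = kappa(0.125, 0.075, 0.075, 0.725)

assert kA > kB  # skewed equal marginals give the lower kappa here

# Special application: two strongly marginal symmetric tables with the same
# Po = 0.85; the more skewed one (marginals (0.2, 0.8) vs (0.4, 0.6)) is lower.
kC = kappa(0.325, 0.075, 0.075, 0.525)  # both marginals (0.4, 0.6)
assert kC > kB
```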

#### 5. Conclusions

Cohen’s kappa is presently a standard tool for the analysis of agreement in a reliability study. The statistic is frequently used as a sample statistic. Various authors have observed in this context that if two pairs of raters have the same amount of observed agreement, the pair whose marginal distributions are more similar to each other may have a lower kappa-value than the pair with more divergent marginal distributions. These properties of Cohen’s kappa have almost exclusively been demonstrated and described by means of examples of tables [4, 8–11, 16]. In this paper we presented exact formulations and proved several marginal dependencies of this type (Theorems 5, 7, and 10). In general, they show that, for tables with the same value of observed agreement, tables with marginal distributions that are more similar may have lower associated kappa-values than tables with marginal distributions that are less similar. Each result was illustrated by an example with hypothetical tables. The results provide a better understanding of kappa when it is used as a sample statistic.

Theorem 5 considers tables that have asymmetric marginals. Although several authors have provided examples with asymmetric marginals, asymmetric tables may not be relevant in practice. If the classifications are hard to make, researchers will often make use of expert observers. Furthermore, novice observers usually receive some training before the actual classifications have to be made. Asymmetric tables are therefore rarely encountered in practice.

Vach [14] emphasizes that kappa should not simply be interpreted as a measure of agreement but that Cohen’s kappa expresses the degree to which observed agreement exceeds the agreement that was expected by chance. The marginal dependencies are a direct consequence of the definition of kappa and its aim to adjust the observed agreement with respect to the expected amount of agreement under chance conditions [14, p. 659]. This is not a reason for discarding Cohen’s kappa.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The author thanks two anonymous reviewers for their helpful comments and valuable suggestions on an earlier version of this paper. This research is part of Veni project 451-11-026 funded by the Netherlands Organisation for Scientific Research.

#### References

1. M. J. Warrens, “On association coefficients for 2 × 2 tables and properties that do not depend on the marginal distributions,” *Psychometrika*, vol. 73, no. 4, pp. 777–789, 2008.
2. D. A. Bloch and H. C. Kraemer, “2 × 2 kappa coefficients: measures of agreement or association,” *Biometrics*, vol. 45, no. 1, pp. 269–287, 1989.
3. J. L. Fleiss, “Measuring agreement between two judges on the presence or absence of a trait,” *Biometrics*, vol. 31, no. 3, pp. 651–659, 1975.
4. R. L. Brennan and D. J. Prediger, “Coefficient kappa: some uses, misuses, and alternatives,” *Educational and Psychological Measurement*, vol. 41, pp. 687–699, 1981.
5. J. Cohen, “A coefficient of agreement for nominal scales,” *Educational and Psychological Measurement*, vol. 20, pp. 37–46, 1960.
6. L. M. Hsu and R. Field, “Interrater agreement measures: comments on kappaₙ, Cohen’s kappa, Scott’s π, and Aickin’s α,” *Understanding Statistics*, vol. 2, no. 3, pp. 205–219, 2003.
7. H. C. Kraemer, “Ramifications of a population model for κ as a coefficient of reliability,” *Psychometrika*, vol. 44, no. 4, pp. 461–472, 1979.
8. J. Sim and C. C. Wright, “The kappa statistic in reliability studies: use, interpretation, and sample size requirements,” *Physical Therapy*, vol. 85, no. 3, pp. 257–268, 2005.
9. A. R. Feinstein and D. V. Cicchetti, “High agreement but low kappa: I. The problems of two paradoxes,” *Journal of Clinical Epidemiology*, vol. 43, no. 6, pp. 543–549, 1990.
10. C. A. Lantz and E. Nebenzahl, “Behavior and interpretation of the κ statistic: resolution of the two paradoxes,” *Journal of Clinical Epidemiology*, vol. 49, no. 4, pp. 431–434, 1996.
11. M. Maclure and W. C. Willett, “Misinterpretation and misuse of the kappa statistic,” *American Journal of Epidemiology*, vol. 126, no. 2, pp. 161–169, 1987.
12. A. von Eye and M. von Eye, “On the marginal dependency of Cohen’s κ,” *European Psychologist*, vol. 13, no. 4, pp. 305–315, 2008.
13. W. D. Thompson and S. D. Walter, “A reappraisal of the kappa coefficient,” *Journal of Clinical Epidemiology*, vol. 41, no. 10, pp. 949–958, 1988.
14. W. Vach, “The dependence of Cohen’s kappa on the prevalence does not matter,” *Journal of Clinical Epidemiology*, vol. 58, no. 7, pp. 655–661, 2005.
15. T. Byrt, J. Bishop, and J. B. Carlin, “Bias, prevalence and kappa,” *Journal of Clinical Epidemiology*, vol. 46, no. 5, pp. 423–429, 1993.
16. M. J. Warrens, “A formal proof of a paradox associated with Cohen’s kappa,” *Journal of Classification*, vol. 27, no. 3, pp. 322–332, 2010.
17. M. J. Warrens, “On similarity coefficients for 2 × 2 tables and correction for chance,” *Psychometrika*, vol. 73, no. 3, pp. 487–502, 2008.
