Computes several coefficients of agreement between the columns and rows of a 2-way contingency table. Most of the documentation for this function is copied directly from the matchClasses function documentation.
calc_agreement(tabble)
tabble
A 2-dimensional contingency table, e.g. as created with table().
A tibble with the following columns:

Kappa
The proportion of data points in the main diagonal, corrected for agreement by chance.

Adjusted Rand
The Rand index corrected for agreement by chance.

Yule, Phi, Peirce, Jaccard
Additional association coefficients (see Details). These require a 2x2 table and are returned as NA otherwise.
Suppose we want to compare two classifications summarized by the contingency table \(T=[t_{ij}]\) where \(i,j=1,\ldots,K\) and \(t_{ij}\) denotes the number of data points which are in class \(i\) in the first partition and in class \(j\) in the second partition. If both classifications use the same labels, then obviously the two classifications agree completely if only elements in the main diagonal of the table are non-zero. On the other hand, large off-diagonal elements correspond to smaller agreement between the two classifications. If match.names is TRUE, the class labels as given by the row and column names are matched, i.e. only columns and rows with the same dimnames are used for the computation.
If the two classifications do not use the same set of labels, or if identical labels can have different meaning (e.g., two outcomes of cluster analysis on the same data set), then the situation is a little bit more complicated. Let \(A\) denote the number of all pairs of data points which are either put into the same cluster by both partitions or put into different clusters by both partitions. Conversely, let \(D\) denote the number of all pairs of data points that are put into one cluster in one partition, but into different clusters by the other partition. Hence, the partitions disagree for all pairs \(D\) and agree for all pairs \(A\). We can measure the agreement by the Rand index \(A/(A+D)\), which is invariant with respect to permutations of the columns or rows of \(T\).
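As a concrete illustration, the Rand index can be computed directly from the contingency table by counting agreeing and disagreeing pairs via the table margins. The sketch below only illustrates the definition above and is not the package's internal code (the function name rand_index is made up):

rand_index <- function(tab) {
  n <- sum(tab)
  same_both <- sum(choose(tab, 2))            # pairs placed together by both partitions
  same_rows <- sum(choose(rowSums(tab), 2))   # pairs placed together by the first partition
  same_cols <- sum(choose(colSums(tab), 2))   # pairs placed together by the second partition
  diff_both <- choose(n, 2) - same_rows - same_cols + same_both   # pairs split by both
  A <- same_both + diff_both                  # agreeing pairs
  D <- choose(n, 2) - A                       # disagreeing pairs
  A / (A + D)                                 # the Rand index
}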
Both indices (the proportion of agreement on the main diagonal and the Rand index) have to be corrected for agreement by chance if the sizes of the classes are not uniform.
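For reference, the chance corrections are the standard ones: Cohen's kappa corrects the diagonal proportion, and the Hubert and Arabie (1985) adjustment corrects the Rand index. A minimal sketch of both, again assuming tab is the contingency table (the function names are illustrative, not the package's internals):

kappa_stat <- function(tab) {
  p  <- tab / sum(tab)
  po <- sum(diag(p))                   # observed agreement (main diagonal)
  pe <- sum(rowSums(p) * colSums(p))   # agreement expected by chance
  (po - pe) / (1 - pe)
}

adjusted_rand <- function(tab) {
  n        <- sum(tab)
  sij      <- sum(choose(tab, 2))
  si       <- sum(choose(rowSums(tab), 2))
  sj       <- sum(choose(colSums(tab), 2))
  expected <- si * sj / choose(n, 2)   # value expected under random labelling
  (sij - expected) / ((si + sj) / 2 - expected)
}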
In addition, the Phi coefficient, the Yule coefficient, Peirce's "science of the method" (sometimes called Youden's J index, though it predates Youden by 66 years), and the Jaccard index are calculated. The first two are based on the approach in the psych package.
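For a 2x2 table these extra coefficients reduce to simple closed forms. The sketch below assumes rows are the predicted class and columns the observed class, with the first level treated as "positive"; this layout and the function name are assumptions for illustration, not the package's definitions:

two_by_two_coefs <- function(tab) {
  tp <- tab[1, 1]; fp <- tab[1, 2]   # predicted positive: truly positive / truly negative
  fn <- tab[2, 1]; tn <- tab[2, 2]   # predicted negative: truly positive / truly negative
  phi     <- (tp * tn - fp * fn) / sqrt((tp + fp) * (fn + tn) * (tp + fn) * (fp + tn))
  yule    <- (tp * tn - fp * fn) / (tp * tn + fp * fn)   # Yule's Q
  peirce  <- tp / (tp + fn) + tn / (fp + tn) - 1         # sensitivity + specificity - 1 (Youden's J)
  jaccard <- tp / (tp + fp + fn)
  c(Phi = phi, Yule = yule, Peirce = peirce, Jaccard = jaccard)
}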
This function is called by confusion_matrix, but if this is all you want, you can simply supply the table to this function.
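For example, a table built with table() can be passed straight in; the vectors below are made up for illustration, and the commented confusion_matrix() call only indicates the other entry point (its actual arguments may differ, so check that function's documentation):

predicted <- sample(c("yes", "no"), size = 100, replace = TRUE)
observed  <- sample(c("yes", "no"), size = 100, replace = TRUE)
calc_agreement(table(predicted, observed))
# confusion_matrix(predicted, observed)  # assumed usage; see its documentation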
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement.

Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of Classification.

Baker, S. G. and Kramer, B. S. (2007). Peirce, Youden, and Receiver Operating Characteristic Curves. The American Statistician.

Peirce, C. S. (1884). The Numerical Measure of the Success of Predictions. Science.
## no class correlations: both kappa and crand almost zero
g1 <- sample(1:5, size=1000, replace=TRUE)
g2 <- sample(1:5, size=1000, replace=TRUE)
tabble <- table(g1, g2)

calc_agreement(tabble)
#> Warning: Some association metrics may not be
#> calculated due to lack of 2x2 table
#> # A tibble: 1 x 6
#>     Kappa `Adjusted Rand` Yule  Phi   Peirce Jaccard
#>     <dbl>           <dbl> <lgl> <lgl> <lgl>  <lgl>
#> 1 0.00501         0.00236 NA    NA    NA     NA

## let pairs (g1=1,g2=1) and (g1=3,g2=3) agree better
k <- sample(1:1000, size=200)
g1[k] <- 1
g2[k] <- 1

k <- sample(1:1000, size=200)
g1[k] <- 3
g2[k] <- 3

tabble <- table(g1, g2)

## both kappa and crand should be significantly larger than before
calc_agreement(tabble)
#> Warning: Some association metrics may not be
#> calculated due to lack of 2x2 table
#> # A tibble: 1 x 6
#>   Kappa `Adjusted Rand` Yule  Phi   Peirce Jaccard
#>   <dbl>           <dbl> <lgl> <lgl> <lgl>  <lgl>
#> 1 0.331           0.227 NA    NA    NA     NA
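The examples above use 5-class tables, so Yule, Phi, Peirce, and Jaccard come back NA. If the inputs are binary, the table is 2x2 and those columns should be filled in as well (a small sketch; output not shown since it depends on the random draw):

p1 <- sample(0:1, size = 1000, replace = TRUE)
p2 <- ifelse(runif(1000) < .8, p1, 1 - p1)   # agrees with p1 roughly 80% of the time
calc_agreement(table(p1, p2))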