Inter-Rater Reliability – A Few Good Resources

Inter-rater reliability is one of those statistics I seem to need just seldom enough that I forget all the details and have to look them up every time.

Luckily, there are a few great websites by experts that explain it (and related concepts) well, in language that is accessible to non-statisticians.

So rather than reinvent the wheel and write about it myself, I'm going to refer you to these excellent sites:

If you know of any others, please share in the comments.  I’ll be happy to add to the list.


Comments

  1. Paul van Haard says

    Dear Mrs. Grace-Martin,

    Please advocate Krippendorff’s alpha and a 95% CI for the population value, which can easily be calculated in R (2019 CRAN) via available scripts.

    Solid arguments in favour of this broadly applicable IRR test can be found in the papers by Krippendorff.

    Kind regards,
    Paul
    Biostatistician / Clinical Biochemist
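For readers curious what the calculation Paul mentions actually involves, here is a minimal pure-Python sketch of Krippendorff's alpha for the nominal-data case only (the rating matrix is invented for illustration; for real analyses, the R scripts he refers to also cover ordinal, interval, and ratio metrics):

```python
from collections import Counter

def krippendorff_alpha_nominal(data):
    """Sketch of Krippendorff's alpha for nominal data.
    `data` is a list of per-rater rating lists; None marks a missing rating."""
    units = zip(*data)  # one tuple of ratings per unit (subject)
    counts = [Counter(v for v in u if v is not None) for u in units]
    counts = [c for c in counts if sum(c.values()) >= 2]  # need >= 2 ratings to pair
    n = sum(sum(c.values()) for c in counts)  # total pairable ratings
    # observed disagreement: mismatching rating pairs within each unit
    d_o = sum(
        sum(c[a] * c[b] for a in c for b in c if a != b) / (sum(c.values()) - 1)
        for c in counts
    ) / n
    # expected disagreement: mismatching pairs across all pairable ratings
    totals = Counter()
    for c in counts:
        totals += c
    d_e = sum(totals[a] * totals[b] for a in totals for b in totals
              if a != b) / (n * (n - 1))
    return 1 - d_o / d_e

# two raters, five units; the fifth unit has only one rating and is dropped
print(krippendorff_alpha_nominal([[0, 1, 0, 1, None],
                                  [0, 1, 1, 0, 1]]))  # ~0.125
```

Units rated by fewer than two raters are simply excluded, which is how alpha accommodates missing ratings.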

    • daniel klein says

      Before advocating Krippendorff’s alpha, note that solid arguments can be made against it (e.g., Zhao et al. 2018). In fact, all the arguments for using Krippendorff’s alpha that I have come across are made by its author. I am not aware of a single paper by another author that empirically shows how alpha is superior to other inter-rater reliability measures.

      Gwet (2014) shows that Krippendorff’s alpha is mathematically almost identical to Fleiss’ version of the kappa coefficient (especially when there are no missing ratings). It therefore shares some of kappa’s shortcomings: most notably, Krippendorff’s alpha (re)produces the so-called high agreement, low kappa paradox (cf. Feinstein and Cicchetti 1990).
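The paradox is easy to reproduce. A short Python sketch with an invented 2x2 table: two raters agree on 85 of 100 items, yet because both raters say "yes" for nearly everything, chance agreement is high and kappa comes out low:

```python
def cohens_kappa(table):
    """Cohen's kappa from a 2x2 table [[a, b], [c, d]] for two raters."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_o = (a + d) / n  # observed agreement
    # chance agreement from each rater's marginal totals
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# 85/100 items agreed on, but both raters use "yes" about 90% of the time,
# so chance agreement is 0.78 and kappa drops to roughly 0.32
print(cohens_kappa([[80, 10], [5, 5]]))  # ~0.318
```

The same imbalanced-marginals mechanism drives the paradox in the multi-rater coefficients discussed above.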

      The confidence interval estimation proposed by Krippendorff has also been criticized (Zapf et al. 2016). Those authors propose the standard bootstrap method instead, while Gwet (2014) suggests yet another variance estimator.
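A percentile bootstrap of the general kind Zapf et al. recommend is straightforward to sketch: resample subjects with replacement, recompute the agreement statistic each time, and take empirical quantiles. The two-rater data below are invented, and simple percent agreement stands in for whichever coefficient is of interest:

```python
import random

def bootstrap_ci(pairs, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample subjects (rating pairs) with
    replacement, recompute the statistic, take empirical quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(pairs) for _ in pairs]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# statistic: proportion of subjects the two raters agree on
agreement = lambda pairs: sum(a == b for a, b in pairs) / len(pairs)

ratings = [(1, 1), (1, 1), (0, 0), (1, 0), (0, 0),
           (1, 1), (0, 1), (1, 1), (0, 0), (1, 1)]
lo, hi = bootstrap_ci(ratings, agreement)
print(lo, hi)  # interval around the observed agreement of 0.8
```

With only ten subjects the interval is wide, which is exactly the information a point estimate alone hides.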

      Moreover, Gwet (2014) also shows how various other coefficients can be extended to multiple raters, any level of measurement, and missing ratings, just like Krippendorff’s alpha. Thus, contrary to common claims, the latter is far from unique in these respects.
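As one example of the multi-rater coefficients in this family, Fleiss' kappa for nominal ratings takes only a few lines of Python (the count matrix below is invented; each row is one subject, each column the number of raters choosing that category):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa: `counts[i][j]` = number of raters assigning subject i
    to category j (every subject rated by the same number of raters)."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # mean per-subject agreement: fraction of agreeing rater pairs
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # chance agreement from the overall category proportions
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# six raters, four subjects, three categories
ratings = [[5, 1, 0],
           [2, 2, 2],
           [0, 0, 6],
           [1, 4, 1]]
print(fleiss_kappa(ratings))
```

This version does not handle missing ratings; Gwet's point is that such extensions are possible for kappa-type coefficients too, not that this particular sketch provides them.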

      References

      Feinstein, A. R., and D. V. Cicchetti. 1990. High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology 43: 543–549.

      Gwet, K. L. 2014. Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters. 4th ed. Gaithersburg, MD: Advanced Analytics.

      Zapf, A., S. Castell, L. Morawietz, and A. Karch. 2016. Measuring inter-rater reliability for nominal data—Which coefficients and confidence intervals are appropriate? BMC Medical Research Methodology 16: 93.

      Zhao, X., G. Feng, J. Liu, and K. Deng. 2018. We agreed to measure agreement – Redefining reliability de-justifies Krippendorff’s alpha. China Media Research 14: 1–15. (Authors’ final version: https://pdfs.semanticscholar.org/7797/4ed9a814bb6f647ea10b1d74fde3bc061016.pdf)

  2. Sean Murphy says

    A conceptual, rather than computation-focused, overview…
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3402032/

    Good overview of reliability in general and how IRR fits into that picture…
    http://www.personality-project.org/r/book/Chapter7.pdf

    Computational resources and explanations using Excel.
    http://www.real-statistics.com/reliability/

    Not free, but the book here is worth getting if you need to think through reliability in greater depth. There are a few free R functions available on the site for computing various IRR statistics and, importantly, generalizations that handle missing data. The R functions work well; I have not used the software. The book is quite comprehensive and well organized, with worked examples in online spreadsheets.
    http://www.agreestat.com

    Also a couple of classic papers…
    Shrout and Fleiss (1979). Intraclass correlations: Uses in assessing rater reliability.
    http://www.aliquote.org/cours/2012_biomed/biblio/Shrout1979.pdf

    McGraw and Wong (1996). Forming inferences about some intraclass correlation coefficients. (Unfortunately this is behind a paywall, but it’s a good article if you have library access.)
    http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=1996-03170-003

