Presentations (Communicative Events)

Augmenting the Kappa Statistic to Determine Interannotator Reliability for Multiply Labeled Data Points

Rosenberg, Andrew; Binkowski, Ed

This paper describes a method for evaluating interannotator reliability in an email corpus annotated for message type (e.g., question, answer, social chat) when annotators are allowed to assign multiple labels to a single message. An augmentation to Cohen's kappa statistic is proposed that permits all data to be included in the reliability measure and further permits the identification of more or less reliably annotated data points.
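
As background, the standard single-label Cohen's kappa compares observed agreement P_o against chance agreement P_e derived from each annotator's label distribution, kappa = (P_o - P_e) / (1 - P_e). The sketch below computes only this baseline statistic, which the paper proposes to augment; the multi-label augmentation itself is not reproduced here, and the function name and toy email labels are illustrative.

```python
# Minimal sketch of standard (single-label) Cohen's kappa for two annotators.
# This is the baseline statistic the paper augments, not the authors' method.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators assigning one label per item."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected (chance) agreement from each annotator's marginal distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(
        (counts_a[lab] / n) * (counts_b[lab] / n)
        for lab in set(labels_a) | set(labels_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example: email messages labeled as question / answer / social chat.
ann_a = ["question", "answer", "social", "question", "answer"]
ann_b = ["question", "answer", "social", "answer", "answer"]
print(cohens_kappa(ann_a, ann_b))  # kappa for the single-label case
```

In the multi-label setting addressed by the paper, each message may carry a set of labels rather than a single one, so the notion of "agreement" per item must be redefined before a kappa-style chance correction can be applied.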

Files

  • rosenberg_binkowski_04.pdf (application/pdf, 65.2 KB)

More About This Work

Academic Units
Computer Science
Publisher
HLT-NAACL-Short '04 Proceedings of HLT-NAACL 2004: Short Papers
Published Here
May 31, 2013