Computing Reliability for Coreference Annotation

Passonneau, Rebecca

Coreference annotation is the annotation of language corpora to indicate which expressions have been used to co-specify the same discourse entity. When annotations of the same data are collected from two or more coders, the reliability of the data may need to be quantified. Two obstacles have stood in the way of applying reliability metrics: incommensurate units across annotations, and the lack of a convenient representation of the coding values. Given N coders and M coding units, reliability is computed from an N-by-M matrix that records the value assigned to unit j by coder k. The solution I present accommodates a wide range of coding choices for the annotator, while preserving the same units across codings. As a consequence, it permits a straightforward application of reliability measurement. In addition, in coreference annotation, disagreements can be complete or partial. The representation I propose has the advantage of incorporating a distance metric that can scale disagreements accordingly. It also allows the investigator to experiment with alternative distance metrics. Finally, the coreference representation proposed here can be useful for other tasks, such as multivariate distributional analysis. The same reliability methodology has already been applied to another coding task, namely semantic annotation of summaries.
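The abstract does not give a formula, but the approach it describes — agreement over an N-by-M coder/unit matrix, with a distance metric that scales partial disagreements — is the shape of Krippendorff's alpha computed over set-valued coding values. The sketch below is an illustrative reconstruction under that assumption, using a Jaccard-style set distance as one possible metric (the paper's own distance metric may differ); the function names are hypothetical.

```python
from itertools import combinations

def jaccard_distance(a, b):
    # Distance between two sets of mention ids: 0 if identical,
    # 1 if disjoint. Partial overlaps fall in between, so partial
    # coreference disagreements are penalized less than complete ones.
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def krippendorff_alpha(data, distance=jaccard_distance):
    # data: one entry per coding unit; each entry lists the value
    # (here a frozenset of mention ids) assigned by each coder.
    # Returns alpha = 1 - (observed disagreement / expected disagreement).
    n = sum(len(unit) for unit in data)  # total number of pairable values
    # Observed disagreement: mean pairwise distance within each unit.
    d_o = sum(
        2 * sum(distance(a, b) for a, b in combinations(unit, 2)) / (len(unit) - 1)
        for unit in data
    ) / n
    # Expected disagreement: mean pairwise distance over all values pooled.
    pooled = [v for unit in data for v in unit]
    d_e = 2 * sum(distance(a, b) for a, b in combinations(pooled, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e
```

Because the distance function is a parameter, an investigator can swap in alternative metrics and observe how the reliability score responds, which is the kind of experimentation the abstract describes.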

More About This Work

Academic Units
Computer Science
Publisher
Proceedings of the Language Resources and Evaluation Conference (LREC 2004)
Published Here
May 31, 2013