
Automation of Summary Evaluation by the Pyramid Method

Nenkova, Ani; Passonneau, Rebecca; Harnly, Aaron; Rambow, Owen C.

The manual Pyramid method for summary evaluation, which focuses on the task of determining whether a summary expresses the same content as a set of manual models, has shown sufficient promise that the Document Understanding Conference 2005 effort will make use of it. However, an automated approach would make the method far more useful for developers and evaluators of automated summarization systems. We present an experimental environment for testing automated evaluation of summaries, pre-annotated for shared information. We reduce the problem to a combination of similarity measure computation and clustering. The best results are achieved with a unigram overlap similarity measure and single-link clustering, which yields high correlation with manual Pyramid scores (r=0.942, p=0.01) and correlates better than the n-gram overlap automatic approaches of the ROUGE system.
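The combination the abstract describes, a similarity measure plus clustering, can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a Jaccard-style unigram overlap as the similarity measure and a threshold-based single-link merge, and the example fragments and the 0.3 threshold are invented for illustration.

```python
# Sketch of unigram-overlap similarity plus single-link clustering,
# the combination the abstract reports as working best.
# The similarity definition and threshold are assumptions, not the paper's exact setup.

def unigram_overlap(a: str, b: str) -> float:
    """Jaccard-style overlap of the two fragments' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def single_link_clusters(fragments: list[str], threshold: float = 0.3) -> list[set[int]]:
    """Single-link clustering: merge two clusters whenever any cross-cluster
    pair of fragments reaches the similarity threshold."""
    clusters = [{i} for i in range(len(fragments))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(unigram_overlap(fragments[a], fragments[b]) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] |= clusters[j]
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters

if __name__ == "__main__":
    # Invented summary fragments; the first two should land in one cluster.
    fragments = [
        "the hurricane destroyed thousands of homes",
        "thousands of homes were destroyed by the storm",
        "relief agencies sent food and water",
    ]
    for cluster in single_link_clusters(fragments):
        print([fragments[i] for i in sorted(cluster)])
```

Single-link is the natural fit for a threshold cutoff: a single sufficiently similar pair is enough to merge two clusters, so near-paraphrases of the same content unit chain together even when not every pair of phrasings overlaps strongly.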

More About This Work

Academic Units: Computer Science
Publisher: Recent Advances in Natural Language Processing (RANLP)
Published Here: June 1, 2013