2003 Reports
Evaluating Content Selection in Human- or Machine-Generated Summaries: The Pyramid Scoring Method
From the outset of automated generation of summaries, the difficulty of evaluation has been widely discussed. Despite many promising attempts, we believe it remains an unsolved problem. Here we present a method for scoring the content of summaries of any length against a weighted inventory of content units, which we refer to as a pyramid. Our method is derived from empirical analysis of human-generated summaries, and provides an informative metric for human- or machine-generated summaries.
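To illustrate the idea of scoring against a weighted inventory, here is a minimal sketch in Python. It assumes the common formulation of the pyramid method: each content unit is weighted by how many human reference summaries express it, and a candidate's score is its total covered weight divided by the ideal weight achievable with the same number of units. The function name and the representation of summaries as lists of unit labels are illustrative, not from the report itself.

```python
from collections import Counter

def pyramid_score(candidate_units, human_summaries):
    """Score a candidate summary against a pyramid of content units."""
    # Weight each content unit by how many human summaries express it.
    # The tiers of the pyramid correspond to these weights: units near
    # the top appear in many summaries, units near the bottom in few.
    weights = Counter()
    for summary in human_summaries:
        for unit in set(summary):
            weights[unit] += 1

    # Observed score: total weight of the units the candidate covers.
    covered = set(candidate_units)
    observed = sum(weights[u] for u in covered)

    # Ideal score: best possible total for a summary expressing the
    # same number of units, i.e. the top-weighted units.
    n = len(covered)
    ideal = sum(sorted(weights.values(), reverse=True)[:n])
    return observed / ideal if ideal else 0.0
```

For example, with three human summaries expressing units {a, b}, {a, c}, and {a, b}, the weights are a = 3, b = 2, c = 1; a candidate expressing {a, c} covers weight 4 against an ideal of 5 for a two-unit summary, giving a score of 0.8.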
Files
- cucs-025-03.pdf (application/pdf, 369 KB)
More About This Work
- Academic Units
- Computer Science
- Publisher
- Department of Computer Science, Columbia University
- Series
- Columbia University Computer Science Technical Reports, CUCS-025-03
- Published Here
- April 26, 2011