Evaluating Content Selection in Human- or Machine-Generated Summaries: The Pyramid Scoring Method

Rebecca J. Passonneau; Ani Nenkova

Date:
Type: Technical reports
Department: Computer Science
Permanent URL:
Series: Columbia University Computer Science Technical Reports
Part Number: CUCS-025-03
Publisher: Department of Computer Science, Columbia University
Publisher Location: New York
Abstract: From the outset of automated generation of summaries, the difficulty of evaluation has been widely discussed. Despite many promising attempts, we believe it remains an unsolved problem. Here we present a method for scoring the content of summaries of any length against a weighted inventory of content units, which we refer to as a pyramid. Our method is derived from empirical analysis of human-generated summaries, and provides an informative metric for human or machine-generated summaries.
Subject(s): Computer science
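
The abstract describes scoring a summary's content against a weighted inventory of content units (the pyramid). The following Python sketch illustrates that general idea under stated assumptions: the function names, the set-based representation of content units, and the normalization against an optimally informative summary containing the same number of content units are choices made for illustration, not the authors' reference implementation.

from collections import Counter

def build_pyramid(model_summaries):
    # Weight each content unit by the number of model (human) summaries
    # in which it appears; higher pyramid tiers correspond to higher weights.
    weights = Counter()
    for content_units in model_summaries:   # each element: a set of content-unit labels
        weights.update(content_units)
    return dict(weights)

def pyramid_score(peer_content_units, pyramid):
    # Sum the weights of the content units the peer summary expresses ...
    observed = sum(pyramid.get(cu, 0) for cu in peer_content_units)
    # ... and normalize by the best score attainable with the same number
    # of content units, i.e. units drawn from the top of the pyramid.
    best = sum(sorted(pyramid.values(), reverse=True)[:len(peer_content_units)])
    return observed / best if best else 0.0

# Hypothetical example: four human summaries annotated into content units A-D.
models = [{"A", "B", "C"}, {"A", "B"}, {"A", "C", "D"}, {"A", "B", "D"}]
pyramid = build_pyramid(models)              # weights: A=4, B=3, C=2, D=2
print(pyramid_score({"A", "C"}, pyramid))    # (4 + 2) / (4 + 3) ≈ 0.857

Under this formulation, a score of 1.0 means the summary expresses content as highly weighted as any summary with the same number of content units could.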
