Audio-Based Semantic Concept Classification for Consumer Video

Keansub Lee; Daniel P. W. Ellis

Type:
Articles
Department:
Electrical Engineering
Volume:
18
Book/Journal Title:
IEEE Transactions on Audio, Speech, and Language Processing
Abstract:
This paper presents a novel method for automatically classifying consumer video clips based on their soundtracks. We use a set of 25 overlapping semantic classes, chosen for their usefulness to users, viability of automatic detection and of annotator labeling, and sufficiency of representation in available video collections. A set of 1873 videos from real users has been annotated with these concepts. Starting with a basic representation of each video clip as a sequence of mel-frequency cepstral coefficient (MFCC) frames, we experiment with three clip-level representations: single Gaussian modeling, Gaussian mixture modeling, and probabilistic latent semantic analysis of a Gaussian component histogram. Using such summary features, we produce support vector machine (SVM) classifiers based on the Kullback-Leibler, Bhattacharyya, or Mahalanobis distance measures. Quantitative evaluation shows that our approaches are effective for detecting interesting concepts in a large collection of real-world consumer video clips.
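The pipeline summarized in the abstract can be illustrated with a minimal sketch: model each clip's MFCC frames as a single Gaussian, then turn a symmetrized Kullback-Leibler divergence between clip models into an SVM kernel value. This is a simplified illustration, not the paper's implementation: it assumes diagonal covariances, a variance floor of 1e-8, and an arbitrary kernel scale `gamma`, none of which are specified in the abstract.

```python
import math

def clip_model(frames):
    """Summarize a clip (list of equal-length MFCC frame vectors) as a
    single diagonal-covariance Gaussian: per-dimension mean and variance.
    The 1e-8 variance floor is an assumption to avoid degenerate models."""
    n = len(frames)
    d = len(frames[0])
    mu = [sum(f[i] for f in frames) / n for i in range(d)]
    var = [sum((f[i] - mu[i]) ** 2 for f in frames) / n + 1e-8
           for i in range(d)]
    return mu, var

def kl_diag(p, q):
    """KL divergence KL(p || q) between two diagonal Gaussians (mu, var),
    summed over dimensions in closed form."""
    (mu_p, var_p), (mu_q, var_q) = p, q
    return sum(
        0.5 * math.log(vq / vp) + (vp + (mp - mq) ** 2) / (2.0 * vq) - 0.5
        for mp, vp, mq, vq in zip(mu_p, var_p, mu_q, var_q)
    )

def kl_kernel(p, q, gamma=0.1):
    """Exponentiated symmetric KL divergence, usable as a precomputed
    SVM kernel entry; gamma is a hypothetical tuning parameter."""
    return math.exp(-gamma * (kl_diag(p, q) + kl_diag(q, p)))
```

In practice one would fill a Gram matrix of `kl_kernel` values over all clip pairs and pass it to an SVM trained with a precomputed kernel; full-covariance Gaussians, GMMs, or the pLSA histogram representation would replace `clip_model` in the richer variants the abstract describes.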
Subject(s):
Electrical engineering
Acoustics
Publisher DOI:
http://dx.doi.org/10.1109/TASL.2009.2034776
