Audio-Based Semantic Concept Classification for Consumer Video

Lee, Keansub; Ellis, Daniel P. W.

This paper presents a novel method for automatically classifying consumer video clips based on their soundtracks. We use a set of 25 overlapping semantic classes, chosen for their usefulness to users, their viability for automatic detection and annotator labeling, and their sufficient representation in available video collections. A set of 1873 videos from real users has been annotated with these concepts. Starting with a basic representation of each video clip as a sequence of mel-frequency cepstral coefficient (MFCC) frames, we experiment with three clip-level representations: single Gaussian modeling, Gaussian mixture modeling, and probabilistic latent semantic analysis of a Gaussian component histogram. Using such summary features, we produce support vector machine (SVM) classifiers based on the Kullback-Leibler, Bhattacharyya, or Mahalanobis distance measures. Quantitative evaluation shows that our approaches are effective for detecting interesting concepts in a large collection of real-world consumer video clips.
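The simplest of the three clip-level representations described above summarizes each clip's MFCC frames as a single Gaussian, and the SVM then operates on a kernel derived from a divergence between those Gaussians. The sketch below (a minimal NumPy illustration, not the authors' implementation; the function names, the regularization term, and the `gamma` scaling are assumptions) shows the closed-form Kullback-Leibler divergence between two Gaussians, symmetrized and exponentiated into an SVM-compatible kernel value:

```python
import numpy as np

def fit_gaussian(frames):
    """Summarize a clip's MFCC frames (n_frames x dim) as one Gaussian."""
    mu = frames.mean(axis=0)
    # Small diagonal regularizer (an assumption) keeps the covariance invertible.
    cov = np.cov(frames, rowvar=False) + 1e-6 * np.eye(frames.shape[1])
    return mu, cov

def kl_divergence(mu1, cov1, mu2, cov2):
    """Closed-form KL( N(mu1,cov1) || N(mu2,cov2) )."""
    d = mu1.shape[0]
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ cov1) + diff @ inv2 @ diff - d
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))

def kl_kernel(clip_a, clip_b, gamma=0.01):
    """Symmetrize KL (it is asymmetric) and map distance -> kernel value."""
    g_a, g_b = fit_gaussian(clip_a), fit_gaussian(clip_b)
    d_sym = kl_divergence(*g_a, *g_b) + kl_divergence(*g_b, *g_a)
    return np.exp(-gamma * d_sym)
```

A kernel matrix built by evaluating `kl_kernel` over all clip pairs can be passed to a standard SVM trainer (e.g. one accepting a precomputed kernel); identical clips yield a kernel value of 1, and increasingly dissimilar soundtracks decay toward 0.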

Also Published In

Title
IEEE Transactions on Audio, Speech, and Language Processing
DOI
https://doi.org/10.1109/TASL.2009.2034776

More About This Work

Academic Units
Electrical Engineering
Published Here
November 9, 2011