Audio-Based Semantic Concept Classification for Consumer Video
- Lee, Keansub
- Ellis, Daniel P. W.
- Electrical Engineering
- Book/Journal Title:
- IEEE Transactions on Audio, Speech, and Language Processing
- This paper presents a novel method for automatically classifying consumer video clips based on their soundtracks. We use a set of 25 overlapping semantic classes, chosen for their usefulness to users, viability of automatic detection and of annotator labeling, and sufficiency of representation in available video collections. A set of 1873 videos from real users has been annotated with these concepts. Starting with a basic representation of each video clip as a sequence of mel-frequency cepstral coefficient (MFCC) frames, we experiment with three clip-level representations: single Gaussian modeling, Gaussian mixture modeling, and probabilistic latent semantic analysis of a Gaussian component histogram. Using such summary features, we produce support vector machine (SVM) classifiers based on the Kullback-Leibler, Bhattacharyya, or Mahalanobis distance measures. Quantitative evaluation shows that our approaches are effective for detecting interesting concepts in a large collection of real-world consumer video clips.
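The abstract describes summarizing each clip's MFCC frames as a single Gaussian and building SVM classifiers on divergence-based distances such as Kullback-Leibler. As a rough illustration (not the authors' implementation), the sketch below fits a full-covariance Gaussian to a clip's frame matrix and turns a symmetrized KL divergence into an exponential kernel; the `gamma` scaling parameter and the diagonal regularization constant are illustrative assumptions.

```python
import numpy as np

def gaussian_stats(frames):
    """Summarize a clip's MFCC frames (n_frames x n_dims) as one Gaussian."""
    mu = frames.mean(axis=0)
    # Small diagonal loading keeps the covariance invertible (assumed value).
    cov = np.cov(frames, rowvar=False) + 1e-6 * np.eye(frames.shape[1])
    return mu, cov

def kl_gauss(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence D(N0 || N1) between full-covariance Gaussians."""
    d = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def kl_kernel(clip_a, clip_b, gamma=0.01):
    """Symmetrized-KL similarity between two clips, usable as an SVM kernel."""
    mu_a, cov_a = gaussian_stats(clip_a)
    mu_b, cov_b = gaussian_stats(clip_b)
    d_sym = (kl_gauss(mu_a, cov_a, mu_b, cov_b)
             + kl_gauss(mu_b, cov_b, mu_a, cov_a))
    return np.exp(-gamma * d_sym)
```

In a pipeline like the one described, a matrix of such pairwise kernel values over the annotated clips could be passed to an SVM trained with a precomputed kernel, one binary classifier per semantic concept.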
- Suggested Citation:
- Keansub Lee, Daniel P. W. Ellis, 2010, Audio-Based Semantic Concept Classification for Consumer Video, Columbia University Academic Commons, http://hdl.handle.net/10022/AC:P:11780.