
Subband Autocorrelation Features for Video Soundtrack Classification

Ellis, Daniel P. W.; Cotton, Courtenay V.

Inspired by prior work on stabilized auditory image features, we have developed novel auditory-model-based features that preserve the fine time structure lost in conventional frame-based features. While the original auditory model is computationally intensive, we present a simpler system that runs about ten times faster yet achieves equivalent performance. We use these features for video soundtrack classification with the Columbia Consumer Video dataset, showing that the new features alone are roughly comparable to traditional MFCCs, but that combining classifiers based on both feature types achieves a 15% improvement in mean Average Precision over the MFCC baseline.
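
The abstract describes features built from short-time autocorrelations computed within the channels of an auditory-style filterbank, so that fine time structure survives framing. The sketch below is a minimal illustration of that idea only, not the authors' implementation: the Butterworth bandpass bank, band layout, frame/hop sizes, and lag range are all assumptions chosen for clarity.

# Minimal sketch of per-subband, per-frame normalized autocorrelation features.
# All parameters here (band layout, frame/hop sizes, lag range) are assumptions
# for illustration; this is not the system described in the paper.
import numpy as np
from scipy.signal import butter, lfilter

def subband_autocorr_features(x, sr, n_bands=8, frame_s=0.025,
                              hop_s=0.010, max_lag_s=0.0125):
    """Return an array of shape (n_bands, n_frames, n_lags) for a mono signal x."""
    frame = int(frame_s * sr)
    hop = int(hop_s * sr)
    max_lag = min(int(max_lag_s * sr), frame)
    # Log-spaced band edges between 100 Hz and just below Nyquist (assumed layout).
    edges = np.logspace(np.log10(100.0), np.log10(0.9 * sr / 2), n_bands + 1)

    bands_out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Second-order Butterworth bandpass standing in for an auditory filter channel.
        b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        band = lfilter(b, a, x)

        frames_out = []
        for start in range(0, len(band) - frame + 1, hop):
            seg = band[start:start + frame]
            # Full autocorrelation, keeping only non-negative lags up to max_lag;
            # normalizing by the zero-lag energy keeps values in [-1, 1].
            ac = np.correlate(seg, seg, mode="full")[frame - 1:frame - 1 + max_lag]
            frames_out.append(ac / (ac[0] + 1e-12))
        bands_out.append(np.array(frames_out))

    return np.stack(bands_out)

In the combined system summarized in the abstract, classifiers trained on the new features and on MFCCs are combined; a simple way to do such a combination is to fuse the per-class scores of the two classifiers, though the paper's exact fusion scheme may differ from this sketch.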


Also Published In
The 38th International Conference on Acoustics, Speech, and Signal Processing

More About This Work

Academic Units
Electrical Engineering
Published Here
April 19, 2013