Articles

Towards Single-Channel Unsupervised Source Separation of Speech Mixtures: The Layered Harmonics/Formants Separation-Tracking Model

Reyes-Gomez, Manuel; Jojic, Nebojsa; Ellis, Daniel P. W.

Speaker models for blind source separation are typically based on HMMs with vast numbers of states, intended to capture source spectral variation and trained on large amounts of isolated speech. Since observations from different sources can be similar, inference must rely on the sequential constraints encoded in the state transition matrix, and these constraints are quite weak. To avoid these problems, we propose a strategy of capturing local deformations of the time-frequency energy distribution. Since consecutive spectral frames are highly correlated, each frame can be accurately described as a nonuniform deformation of its predecessor. A smooth pattern of deformations is indicative of a single speaker, while cliffs in the deformation fields may indicate a speaker switch. Further, the log-spectrum of speech can be decomposed into two additive layers that separately describe the harmonics and the formant structure. We model the smooth deformations of both layers as hidden transformation variables in Markov random fields (MRFs), with overlapping spectrogram subwindows as observations, each assumed to be a noisy sum of the two layers. Loopy belief propagation yields efficient inference. Without any pre-trained speech or speaker models, this approach can fill in missing time-frequency observations, and the local entropy of the deformation fields indicates source boundaries for separation.
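To make the deformation-field intuition concrete, here is a minimal Python sketch of the frame-to-frame alignment idea. It is not the paper's layered MRF model: it estimates, for each small frequency patch, the vertical shift that best matches the previous frame, and uses the entropy of each frame's shift distribution as a crude boundary score (a smooth deformation field is peaked and low-entropy; a speaker switch should produce a spike). All function names and parameter values below are hypothetical choices for illustration.

# A minimal sketch (not the paper's MRF/belief-propagation model): estimate a
# per-band deformation field on a log-spectrogram by exhaustively matching
# small spectral patches against the previous frame, then score each frame by
# the entropy of its local shift distribution. Names are hypothetical.
import numpy as np

def log_spectrogram(x, n_fft=512, hop=128):
    """Magnitude log-spectrogram via a plain Hann-windowed STFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log(spec + 1e-8).T          # shape: (freq_bins, n_frames)

def deformation_field(S, patch=8, max_shift=3):
    """For each frame t and each frequency patch, find the vertical shift
    that best aligns frame t with frame t-1 (sum-of-squared-differences)."""
    n_freq, n_frames = S.shape
    n_patches = (n_freq - 2 * max_shift) // patch
    D = np.zeros((n_patches, n_frames), dtype=int)
    for t in range(1, n_frames):
        for p in range(n_patches):
            lo = max_shift + p * patch
            ref = S[lo:lo + patch, t - 1]
            errs = [np.sum((S[lo + s:lo + s + patch, t] - ref) ** 2)
                    for s in range(-max_shift, max_shift + 1)]
            D[p, t] = np.argmin(errs) - max_shift
    return D

def boundary_score(D, max_shift=3):
    """Entropy of the shift distribution in each frame: smooth (peaked)
    deformation fields give low entropy; a speaker switch should spike it."""
    n_levels = 2 * max_shift + 1
    scores = np.zeros(D.shape[1])
    for t in range(D.shape[1]):
        counts = np.bincount(D[:, t] + max_shift, minlength=n_levels)
        prob = counts / counts.sum()
        nz = prob[prob > 0]
        scores[t] = -np.sum(nz * np.log(nz))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(16000)        # stand-in for a speech waveform
    S = log_spectrogram(x)
    D = deformation_field(S)
    print(boundary_score(D)[:10])

In the paper's model, by contrast, the deformations of the harmonics and formant layers are inferred jointly across overlapping subwindows with loopy belief propagation, rather than by this independent exhaustive search.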

Also Published In

Title: ISCA Tutorial and Research Workshop on Statistical and Perceptual Audio Processing, ICC Jeju, Korea, October 3, 2004
Publisher: International Speech Communication Association

More About This Work

Academic Units: Electrical Engineering
Published Here: June 28, 2012