Publication details

Extraction of audio features specific to speech production for multimodal speaker detection

Authors

BESSON Patricia, POPOVICI Vlad, VESIN Jean-Marc, THIRAN Jean-Philippe, KUNT Murat

Year of publication 2008
Type Article in Periodical
Magazine / Source IEEE TRANSACTIONS ON MULTIMEDIA
DOI http://dx.doi.org/10.1109/TMM.2007.911302
Keywords audio features; differential evolution; multimodal; mutual information; speaker detection; speech
Description A method that exploits an information-theoretic framework to extract optimized audio features using video information is presented. A simple measure of mutual information (MI) between the resulting audio and video features allows the detection of the active speaker among different candidates. This method involves the optimization of an MI-based objective function. No approximation is needed to solve this optimization problem, neither for the estimation of the probability density functions (pdfs) of the features nor for the cost function itself. The pdfs are estimated from the samples using a nonparametric approach. The challenging optimization problem is solved using a global method: the differential evolution algorithm. Two information-theoretic optimization criteria are compared, and their ability to extract audio features specific to speech production is discussed. Using these specific audio features, candidate video features are then classified as members of the "speaker" or "non-speaker" class, resulting in a speaker detection scheme. As a result, our method achieves a speaker detection rate of 100% on in-house test sequences, and of 85% on the most commonly used sequences.
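
To make the described pipeline concrete, below is a minimal, hypothetical Python sketch, not the authors' implementation: it maximizes an MI objective between a linear combination of raw audio descriptors and a video feature using a differential evolution optimizer. The histogram-based MI estimator, the linear audio projection, the toy data, and the use of scipy.optimize.differential_evolution are all illustrative assumptions; the paper estimates the feature pdfs nonparametrically from the samples and defines its own audio feature extraction.

    # Hypothetical illustration of MI-driven audio feature optimization via
    # differential evolution (assumed setup, not the paper's code).
    import numpy as np
    from scipy.optimize import differential_evolution

    def mutual_information(x, y, bins=16):
        """Plug-in MI estimate from a 2-D histogram of two 1-D feature streams."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()                      # joint pmf over the bins
        px = pxy.sum(axis=1, keepdims=True)        # marginal of x
        py = pxy.sum(axis=0, keepdims=True)        # marginal of y
        nz = pxy > 0                               # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def objective(w, audio, video):
        """Negative MI between a weighted audio feature and the video feature
        (negated because differential_evolution minimizes)."""
        a = audio @ w                              # project raw audio descriptors
        return -mutual_information(a, video)

    # Toy data: T frames, D raw audio descriptors per frame, one video feature per frame.
    rng = np.random.default_rng(0)
    T, D = 500, 8
    audio = rng.normal(size=(T, D))
    video = audio[:, 0] + 0.3 * rng.normal(size=T)  # video correlates with one descriptor

    result = differential_evolution(objective, bounds=[(-1.0, 1.0)] * D,
                                    args=(audio, video), seed=0, maxiter=50)
    print("optimized weights:", np.round(result.x, 2))
    print("MI at optimum:", -result.fun)

A population-based global optimizer is used here for the same reason the abstract cites the differential evolution algorithm: the MI surface over the feature parameters is non-convex, and a gradient-free global search avoids getting trapped in local optima.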
