Conference paper, Year: 2025

Diffusion-based Unsupervised Audio-visual Speech Enhancement

Abstract

This paper proposes a new unsupervised audio-visual speech enhancement (AVSE) approach that combines a diffusion-based audio-visual speech generative model with a non-negative matrix factorization (NMF) noise model. First, the diffusion model is pre-trained on clean speech, conditioned on the corresponding video data, to model the generative distribution of clean speech. This pre-trained model is then paired with the NMF-based noise model to estimate clean speech iteratively: a diffusion-based posterior sampling scheme runs within the reverse diffusion process, and after each iteration the resulting speech estimate is used to update the noise parameters. Experimental results confirm that the proposed AVSE approach not only outperforms its audio-only counterpart but also generalizes better than a recent supervised generative AVSE method. Additionally, the new inference algorithm offers a better trade-off between inference speed and performance than the previous diffusion-based method. Code and demo are available at: https://jeaneudesayilo.github.io/fast_UdiffSE
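
The abstract describes an alternating inference scheme: a reverse-diffusion posterior-sampling step on the speech estimate, followed by an update of the NMF noise parameters. The sketch below illustrates that loop in a simplified, real-valued form. It is a minimal sketch under stated assumptions, not the authors' implementation: the score network, noise schedule, step sizes, and all names (score_model, y_obs, W, H) are illustrative placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions: frequency bins, time frames, NMF rank.
    F, T, K = 257, 120, 8

    # Real-valued surrogate of the noisy observation (stands in for a complex STFT).
    y_obs = rng.standard_normal((F, T))

    # NMF noise model: noise variance V = W @ H with nonnegative factors.
    W = np.abs(rng.standard_normal((F, K)))
    H = np.abs(rng.standard_normal((K, T)))
    eps = 1e-8

    def score_model(x, sigma, video_feats):
        # Placeholder for the pre-trained audio-visual score network:
        # here, the score of a zero-mean Gaussian so the sketch runs end to end.
        return -x / (sigma**2 + 1.0)

    video_feats = None                           # video conditioning (placeholder)
    sigmas = np.geomspace(1.0, 1e-2, 30)         # assumed decreasing noise schedule
    x = sigmas[0] * rng.standard_normal((F, T))  # initial diffusion state

    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        # (1) Prior step: one reverse-diffusion move using the generative model.
        step = sigma**2 - sigma_next**2
        x = x + step * score_model(x, sigma, video_feats)
        x = x + np.sqrt(max(step, 0.0)) * rng.standard_normal((F, T))

        # (2) Likelihood step: pull the estimate toward the noisy observation,
        #     weighted by the current NMF noise variance V = W @ H.
        V = W @ H + eps
        x = x + 0.1 * (y_obs - x) / V

        # (3) Noise update: refit the NMF factors to the residual noise power
        #     implied by the current speech estimate (Euclidean multiplicative updates).
        N = np.maximum((y_obs - x)**2, eps)
        W *= (N @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ N) / (W.T @ W @ H + eps)

    speech_estimate = x  # final enhanced (surrogate) speech estimate

Step (3) is what distinguishes this unsupervised scheme from a fixed-likelihood posterior sampler: the noise model is refined on the fly from the current residual, so no paired noisy/clean training data is needed.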


Dates and versions

hal-04718254, version 1 (03-10-2024)

Cite

Jean-Eudes Ayilo, Mostafa Sadeghi, Romain Serizel, Xavier Alameda-Pineda. Diffusion-based Unsupervised Audio-visual Speech Enhancement. International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Apr 2025, Hyderabad, India. ⟨hal-04718254v1⟩