Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
Authors: María Andrea Cruz Blandón, Zakaria Aldeneh, Jie Chi, Maureen de Seyssel
Self-supervised learning (SSL) has driven significant advances in speech representation learning. Models such as wav2vec 2.0 and HuBERT have achieved state-of-the-art results on tasks such as speech recognition, particularly in monolingual settings. However, multilingual SSL models tend to underperform their monolingual counterparts on each individual language, especially when only a few languages are involved, as in the bilingual setting. In this work, we investigate a novel approach to reducing this performance gap by introducing limited visual grounding into bilingual speech SSL models. Our results show that visual grounding benefits both monolingual and bilingual models, with especially pronounced gains for the latter: it reduces the multilingual performance gap on zero-shot phonetic discrimination from 31.5% for audio-only models to 8.04% with grounding.
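To make the reported numbers concrete, here is a minimal Python sketch, assuming the "multilingual gap" is the relative increase in zero-shot phonetic-discrimination (ABX) error of a bilingual model over its monolingual counterpart. The formula and the example error rates are illustrative assumptions; only the 31.5% and 8.04% gap figures come from the abstract.

def multilingual_gap(bilingual_err: float, monolingual_err: float) -> float:
    # Relative error increase of the bilingual model, in percent.
    # NOTE: hypothetical definition; the paper may compute the gap differently.
    return 100.0 * (bilingual_err - monolingual_err) / monolingual_err

# Error rates below are invented solely to reproduce the quoted gap figures.
print(multilingual_gap(6.575, 5.0))  # audio-only models: 31.5
print(multilingual_gap(5.402, 5.0))  # with visual grounding: 8.04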
AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition
August 12, 2024 · Research areas: Methods and Algorithms; Speech and Natural Language Processing · Conference: ECCV
Audio-visual speech contains synchronized audio and visual information that provides cross-modal supervision to learn representations for both automatic speech recognition (ASR) and visual speech recognition (VSR). We introduce continuous pseudo-labeling for audio-visual speech recognition (AV-CPL), a semi-supervised method to train an audio-visual speech recognition (AVSR) model on a combination of labeled and unlabeled videos with continuously…
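The sketch below illustrates the general shape of one continuous pseudo-labeling training step in PyTorch, assuming a CTC-style AVSR model that maps (audio, video) to per-frame log-probabilities of shape (T, B, vocab). The model itself is not shown, and the greedy CTC decoder is a generic stand-in, not necessarily the paper's actual decoding setup.

import torch

def greedy_ctc_decode(log_probs, blank=0):
    # Standard greedy CTC decoding: collapse repeats, then drop blanks.
    # Returns a flat target tensor plus per-utterance lengths, the format
    # torch.nn.CTCLoss expects.
    ids = log_probs.argmax(dim=-1).transpose(0, 1)  # (B, T)
    seqs = []
    for row in ids:
        out, prev = [], blank
        for tok in row.tolist():
            if tok != blank and tok != prev:
                out.append(tok)
            prev = tok
        seqs.append(out)
    flat = torch.tensor([t for s in seqs for t in s], dtype=torch.long)
    lens = torch.tensor([len(s) for s in seqs], dtype=torch.long)
    return flat, lens

def cpl_step(model, optimizer, labeled, unlabeled):
    # One semi-supervised step: supervised CTC loss on the labeled batch plus
    # a CTC loss against pseudo-labels produced by the current model.
    ctc = torch.nn.CTCLoss(zero_infinity=True)
    audio_l, video_l, in_lens_l, targets, tgt_lens = labeled
    audio_u, video_u, in_lens_u = unlabeled

    # Pseudo-label the unlabeled videos with the current parameters (no grad),
    # so the labels keep improving as training progresses ("continuous" PL).
    model.eval()
    with torch.no_grad():
        pseudo, pseudo_lens = greedy_ctc_decode(model(audio_u, video_u))

    model.train()
    loss = ctc(model(audio_l, video_l), targets, in_lens_l, tgt_lens)
    loss = loss + ctc(model(audio_u, video_u), pseudo, in_lens_u, pseudo_lens)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)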
More Speaking or More Speakers?
March 14, 2023 · Research area: Speech and Natural Language Processing · Conference: ICASSP
Self-training (ST) and self-supervised learning (SSL) methods have demonstrated strong improvements in automatic speech recognition (ASR). In spite of these advances, to the best of our knowledge, there is no analysis of how the composition of the labelled and unlabelled datasets used in these methods affects the results. In this work we aim to analyse the effect of the number of speakers in the training data on a recent SSL algorithm (wav2vec 2.0),…
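As a rough illustration of the controlled comparison this implies, the Python sketch below samples training subsets of a fixed total duration while varying the number of distinct speakers. The (utt_id, speaker_id, duration_sec) record layout and the sampling procedure are hypothetical assumptions, not the paper's actual pipeline.

import random
from collections import defaultdict

def sample_subset(utterances, n_speakers, target_hours, seed=0):
    # utterances: list of (utt_id, speaker_id, duration_sec) records.
    # Holds total audio duration roughly fixed while varying speaker count,
    # so "more speaking" can be separated from "more speakers".
    rng = random.Random(seed)
    by_speaker = defaultdict(list)
    for utt in utterances:
        by_speaker[utt[1]].append(utt)
    chosen = rng.sample(sorted(by_speaker), n_speakers)
    pool = [u for spk in chosen for u in by_speaker[spk]]
    rng.shuffle(pool)
    subset, total = [], 0.0
    for utt in pool:
        if total >= target_hours * 3600:
            break
        subset.append(utt)
        total += utt[2]
    return subset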