Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
Authors: María Andrea Cruz Blandón†, Zakaria Aldeneh, Jie Chi, Maureen de Seyssel
Self-supervised learning (SSL) has driven significant advances in speech representation learning. Models such as wav2vec 2.0 and HuBERT achieve state-of-the-art results on tasks like speech recognition, particularly in monolingual settings. However, multilingual SSL models tend to underperform their monolingual counterparts on each individual language, especially in multilingual scenarios with few languages, such as the bilingual setting. In this work, we investigate a novel approach to reducing this performance gap by introducing limited visual grounding into bilingual speech SSL models. Our results show that visual grounding benefits both monolingual and bilingual models, with especially pronounced gains for the latter: on zero-shot phonetic discrimination, it reduces the multilingual performance gap from 31.5% for audio-only models to 8.04%.
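The abstract does not spell out the grounding objective, but a common way to ground speech representations in vision is a CLIP-style contrastive loss between pooled audio and image embeddings from paired data (e.g. spoken captions). The sketch below is illustrative only, not the paper's implementation; the function name, embedding shapes, and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def audio_visual_infonce(audio_emb: torch.Tensor,
                         image_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired (audio, image) embeddings.

    audio_emb, image_emb: (batch, dim) pooled embeddings, where row i of
    each tensor comes from the same spoken-caption / image pair.
    """
    # L2-normalise so the dot product is a cosine similarity.
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the positives.
    logits = a @ v.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: audio -> image and image -> audio.
    loss_a2v = F.cross_entropy(logits, targets)
    loss_v2a = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_a2v + loss_v2a)
```

In a setup like the one described, a term of this kind would typically be added to the audio-only SSL objective for the subset of utterances that have paired images, so the grounding signal can stay limited while the bulk of training remains audio-only.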
Assessing the Role of Data Quality in Training Bilingual Language Models
December 11, 2025 · Research areas: Data Science and Annotation; Speech and Natural Language Processing · Conference: EMNLP
Bilingual and multilingual language models offer a promising path toward scaling NLP systems across diverse languages and users. However, their performance often varies widely across languages: prior work shows that adding more languages can degrade performance for some (such as English) while improving it for others (typically more data-constrained languages). In this work, we investigate the causes of these inconsistencies by comparing…
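The cross-language asymmetry described above is usually quantified with per-language evaluation of a single shared model. As a minimal sketch, assuming a hypothetical `model.score(text)` that returns the total negative log-likelihood in nats together with the token count, per-language perplexity can be compared like this:

```python
import math

def per_language_perplexity(model, eval_sets):
    """Compare a bilingual model's perplexity on held-out text, per language.

    `eval_sets` maps a language code to a list of held-out strings;
    `model.score` is an assumed interface, not a specific library API.
    """
    results = {}
    for lang, texts in eval_sets.items():
        total_nll, total_tokens = 0.0, 0
        for text in texts:
            nll, n_tokens = model.score(text)
            total_nll += nll
            total_tokens += n_tokens
        # Perplexity is the exponentiated mean per-token NLL.
        results[lang] = math.exp(total_nll / total_tokens)
    return results

# e.g. per_language_perplexity(model, {"en": en_texts, "fr": fr_texts})
```

A table of such per-language numbers across model variants is what makes claims like "adding a language hurt English but helped the lower-resource language" concrete.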
More Speaking or More Speakers?
March 14, 2023 · Research area: Speech and Natural Language Processing · Conference: ICASSP
Self-training (ST) and self-supervised learning (SSL) methods have demonstrated strong improvements in automatic speech recognition (ASR). Despite these advances, to the best of our knowledge, there is no analysis of how the composition of the labelled and unlabelled datasets used in these methods affects the results. In this work, we analyse the effect of the number of speakers in the training data on a recent SSL algorithm (wav2vec 2.0),…
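Isolating "more speakers" from "more speech" requires training subsets where total audio duration is held fixed while the speaker count varies (or vice versa). The excerpt does not describe the paper's exact sampling procedure, so the following is a plausible sketch with hypothetical names, assuming utterances are listed as (speaker_id, duration_seconds, path) tuples:

```python
import random
from collections import defaultdict

def sample_fixed_hours(utterances, n_speakers, target_hours, seed=0):
    """Draw a training subset with a fixed audio budget from a fixed
    number of speakers, so speaker count and speech amount can be
    varied independently."""
    rng = random.Random(seed)
    by_speaker = defaultdict(list)
    for spk, dur, path in utterances:
        by_speaker[spk].append((dur, path))

    # Pick which speakers contribute, then shuffle each speaker's pool.
    chosen = rng.sample(sorted(by_speaker), n_speakers)
    pools = {spk: rng.sample(by_speaker[spk], len(by_speaker[spk]))
             for spk in chosen}

    budget = target_hours * 3600.0
    subset, total = [], 0.0
    # Round-robin over speakers so the budget is spread evenly.
    while total < budget and any(pools.values()):
        for spk in chosen:
            if pools[spk] and total < budget:
                dur, path = pools[spk].pop()
                subset.append(path)
                total += dur
    return subset
```

Pretraining one wav2vec 2.0 model per subset (e.g. 100 hours from 10, 100, or 1,000 speakers) and comparing downstream ASR error rates is one way to operationalise the question posed in the title.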