More Speaking or More Speakers?
In collaboration with Carnegie Mellon University
Authors: Dan Berrebbi, Ronan Collobert, Navdeep Jaitly, Tatiana Likhomanenko
Self-training (ST) and self-supervised learning (SSL) methods have demonstrated strong improvements in automatic speech recognition (ASR). Despite these advances, to the best of our knowledge there is no analysis of how the composition of the labeled and unlabeled datasets used in these methods affects the results. In this work we analyze the effect of the number of speakers in the training data on a recent SSL algorithm (wav2vec 2.0) and a recent ST algorithm (slimIPL). We perform a systematic analysis on both labeled and unlabeled data by varying the number of speakers while keeping the number of hours fixed, and vice versa. Our findings suggest that SSL requires a large amount of unlabeled data to produce high-accuracy results, while ST requires a sufficient number of speakers in the labeled data, especially in low-resource settings. In this way, the two approaches improve supervised learning in different regimes of dataset composition.
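The sampling protocol mentioned in the abstract (holding the total amount of audio fixed while varying the number of speakers, and vice versa) can be illustrated with a minimal sketch. The corpus below is synthetic and the speaker counts, hour budgets, and `sample_split` helper are hypothetical placeholders for illustration only; they are not the paper's actual experimental setup.

```python
import random
from collections import defaultdict

# Hypothetical corpus: speaker_id -> list of utterance durations in seconds.
# In practice this metadata would come from an ASR corpus; here it is synthetic.
random.seed(0)
corpus = {
    f"spk{i:04d}": [random.uniform(3.0, 15.0) for _ in range(random.randint(50, 200))]
    for i in range(500)
}

def sample_split(corpus, num_speakers, target_hours):
    """Draw a subset with `num_speakers` speakers totalling roughly `target_hours`.

    Utterances are taken round-robin across the chosen speakers so the audio is
    spread evenly, which lets us vary the number of speakers while holding the
    amount of audio fixed (and vice versa).
    """
    speakers = random.sample(sorted(corpus), num_speakers)
    budget = target_hours * 3600.0
    split, total, idx = defaultdict(list), 0.0, 0
    while total < budget:
        progressed = False
        for spk in speakers:
            utts = corpus[spk]
            if idx < len(utts) and total < budget:
                split[spk].append(utts[idx])
                total += utts[idx]
                progressed = True
        if not progressed:  # ran out of audio before reaching the budget
            break
        idx += 1
    return split, total / 3600.0

# Axis 1: fixed amount of audio (10 h here), varying number of speakers.
for n_spk in (50, 100, 200):
    _, hours = sample_split(corpus, n_spk, target_hours=10)
    print(f"{n_spk:4d} speakers -> {hours:6.1f} h")

# Axis 2: fixed number of speakers (100 here), varying amount of audio.
for hrs in (5, 10, 25):
    _, hours = sample_split(corpus, 100, target_hours=hrs)
    print(f"100 speakers -> {hours:6.1f} h (target {hrs} h)")
```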