
Self-training (ST) and self-supervised learning (SSL) methods have demonstrated strong improvements in automatic speech recognition (ASR). Despite these advances, to the best of our knowledge, there is no analysis of how the composition of the labeled and unlabeled datasets used in these methods affects the results. In this work we analyze the effect of the number of speakers in the training data on a recent SSL algorithm (wav2vec 2.0) and a recent ST algorithm (slimIPL). We perform a systematic analysis on both labeled and unlabeled data by varying the number of speakers while keeping the number of hours fixed, and vice versa. Our findings suggest that SSL requires a large amount of unlabeled data to produce high-accuracy results, while ST requires a sufficient number of speakers in the labeled data, especially in the low-resource setting. In this manner, these two approaches improve supervised learning in different regimes of dataset composition.
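The controlled comparison described above can be sketched in code: build a training subset with a chosen speaker count while holding the total duration (approximately) fixed. This is a minimal illustration of the sampling idea, not the paper's actual data pipeline; the `(speaker_id, duration_sec)` utterance schema and the equal per-speaker duration budget are assumptions made for the example.

```python
import random
from collections import defaultdict

def sample_subset(utterances, n_speakers, target_hours, seed=0):
    """Sample a subset with a fixed number of speakers and an
    (approximately) fixed total duration.

    `utterances` is a list of (speaker_id, duration_sec) pairs;
    this schema is an illustrative assumption.
    """
    rng = random.Random(seed)
    by_speaker = defaultdict(list)
    for spk, dur in utterances:
        by_speaker[spk].append(dur)

    # Pick the requested number of speakers at random.
    chosen = rng.sample(sorted(by_speaker), n_speakers)

    # Give each chosen speaker an equal share of the duration budget.
    per_speaker_sec = target_hours * 3600 / n_speakers
    subset, total_sec = [], 0.0
    for spk in chosen:
        budget = per_speaker_sec
        for dur in by_speaker[spk]:
            if budget <= 0:
                break
            subset.append((spk, dur))
            budget -= dur
            total_sec += dur
    return subset, total_sec / 3600
```

Varying `n_speakers` with `target_hours` fixed (or vice versa) yields the two families of subsets compared in the analysis.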

Image of WER heat maps
Word error rate (WER) heatmaps on the LibriSpeech validation set for SSL with wav2vec 2.0 Base (column 1) and Large (column 2), and for ST with slimIPL (column 3). We consider WER as a function of four variables: unlabeled hours, unlabeled speakers, labeled hours, and labeled speakers, and plot a heatmap for each pair of variables while averaging over the other two. We observe that i) in low-resource settings, it is critical to have a sufficient number of speakers in the labeled set; ii) the improvement from increasing the number of speakers in both labeled and unlabeled data plateaus after a certain threshold; iii) it is critical for SSL to have enough unlabeled data, while for ST it is critical to have enough speakers in the labeled data.
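The marginalization in the caption, plotting one pair of variables while averaging over the other two, reduces to a mean over axes of a 4-D results array. The sketch below uses random placeholder WER values and illustrative grid sizes, not the paper's actual results.

```python
import numpy as np

# Placeholder WER results indexed by the four dataset-composition
# variables; axis order and sizes are illustrative assumptions.
# Axes: (unlabeled_hours, unlabeled_speakers,
#        labeled_hours, labeled_speakers)
wer = np.random.default_rng(0).uniform(5.0, 40.0, size=(3, 3, 4, 4))

# Heatmap over (labeled_hours, labeled_speakers): average WER
# over the two unlabeled-data axes.
heatmap = wer.mean(axis=(0, 1))
print(heatmap.shape)  # (4, 4)
```

Repeating this for each of the six variable pairs produces the full set of heatmaps shown in the figure.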

Related readings and updates.

Continuous Pseudo-Labeling from the Start

Self-training (ST), or pseudo-labeling, has sparked significant interest in the automatic speech recognition (ASR) community recently because of its success in harnessing unlabeled data. Unlike prior semi-supervised learning approaches that relied on iteratively regenerating pseudo-labels (PLs) from a trained model and using them to train a new model, recent state-of-the-art methods perform ‘continuous training’ where PLs are generated using a…

Elastic Weight Consolidation Improves the Robustness of Self-Supervised Learning Methods under Transfer

This paper was accepted at the workshop "Self-Supervised Learning - Theory and Practice" at NeurIPS 2022. Self-supervised representation learning (SSL) methods provide an effective label-free initial condition for fine-tuning downstream tasks. However, in numerous realistic scenarios, the downstream task might be biased with respect to the target label distribution. This in turn moves the learned fine-tuned model posterior away from the initial…