
Self-training (ST) and self-supervised learning (SSL) methods have demonstrated strong improvements in automatic speech recognition (ASR). Despite these advances, to the best of our knowledge there is no analysis of how the composition of the labeled and unlabeled datasets used in these methods affects the results. In this work we analyze the effect of the number of speakers in the training data on a recent SSL algorithm (wav2vec 2.0) and a recent ST algorithm (slimIPL). We perform a systematic analysis on both labeled and unlabeled data by varying the number of speakers while keeping the number of hours fixed, and vice versa. Our findings suggest that SSL requires a large amount of unlabeled data to produce high-accuracy results, while ST requires a sufficient number of speakers in the labeled data, especially in the low-resource setting. In this way the two approaches improve supervised learning in different regimes of dataset composition.
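The core of the experimental design is building training subsets that hold total hours fixed while varying the speaker count (and vice versa). A minimal sketch of one way to do this is below; the data layout, greedy duration budgeting, and function name are illustrative assumptions, not the paper's exact protocol.

```python
import random
from collections import defaultdict

def sample_subset(utterances, target_hours, num_speakers, seed=0):
    """Draw a subset with a bounded total duration from a fixed number of speakers.

    `utterances` is a list of (speaker_id, duration_seconds) pairs.
    """
    rng = random.Random(seed)
    by_speaker = defaultdict(list)
    for spk, dur in utterances:
        by_speaker[spk].append((spk, dur))

    # Pick the allowed speakers, then pool and shuffle their utterances.
    chosen = rng.sample(sorted(by_speaker), num_speakers)
    pool = [u for spk in chosen for u in by_speaker[spk]]
    rng.shuffle(pool)

    # Greedily fill the duration budget, skipping utterances that overflow it.
    subset, total = [], 0.0
    budget = target_hours * 3600
    for spk, dur in pool:
        if total + dur > budget:
            continue
        subset.append((spk, dur))
        total += dur
    return subset
```

Sweeping `num_speakers` at fixed `target_hours` (or the reverse) yields the grid of training conditions analyzed in the paper.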

Figure: Word error rate (WER) heatmaps on the LibriSpeech validation set for SSL with wav2vec 2.0 Base (column 1) and Large (column 2), and for ST with slimIPL (column 3). We consider WER as a function of four variables: unlabeled hours, unlabeled speakers, labeled hours, and labeled speakers, and plot a heatmap for each pair of variables while averaging over the other two. We observe that (i) in low-resource settings, it is critical to have a sufficient number of speakers in the labeled set; (ii) the improvement from increasing the number of speakers in both labeled and unlabeled data plateaus after a certain threshold; and (iii) it is critical for SSL to have enough unlabeled data, while for ST it is critical to have enough speakers in the labeled data.
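The marginalization behind each heatmap panel can be sketched as follows: treat WER as a 4-D grid over the four dataset variables and average out the two axes not being plotted. Axis names, grid sizes, and the random placeholder values are illustrative assumptions.

```python
import numpy as np

# WER as a 4-D grid indexed by (unlabeled hours, unlabeled speakers,
# labeled hours, labeled speakers); random values stand in for real results.
axes = ["unlab_hours", "unlab_speakers", "lab_hours", "lab_speakers"]
wer = np.random.rand(3, 3, 3, 3)

def heatmap(grid, var_a, var_b):
    """Average WER over the two axes not in (var_a, var_b)."""
    keep = (axes.index(var_a), axes.index(var_b))
    drop = tuple(i for i in range(grid.ndim) if i not in keep)
    return grid.mean(axis=drop)

# 2-D array ready for plotting, e.g. with matplotlib's imshow.
panel = heatmap(wer, "lab_hours", "lab_speakers")
```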

Related readings and updates.

Elastic Weight Consolidation Improves the Robustness of Self-Supervised Learning Methods under Transfer

This paper was accepted at the workshop "Self-Supervised Learning - Theory and Practice" at NeurIPS 2022. Self-supervised representation learning (SSL) methods provide an effective label-free initial condition for fine-tuning downstream tasks. However, in numerous realistic scenarios, the downstream task might be biased with respect to the target label distribution. This in turn moves the learned fine-tuned model posterior away from the initial…
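Elastic Weight Consolidation itself is a standard regularizer (Kirkpatrick et al.): a quadratic pull toward the pre-trained parameters, weighted per-parameter by Fisher information. A minimal sketch of the penalty term is below; variable names are illustrative, and how the paper combines it with SSL fine-tuning is not shown here.

```python
import numpy as np

def ewc_penalty(params, anchor_params, fisher, lam=1.0):
    """EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    where theta* are the anchor (pre-trained) parameters and F is the
    diagonal Fisher information estimated at theta*."""
    return 0.5 * lam * float(np.sum(fisher * (params - anchor_params) ** 2))
```

The penalty is added to the downstream task loss, so parameters the original model relied on heavily (large `fisher`) resist drifting during fine-tuning.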

Generating Multilingual Voices Using Speaker Space Translation Based on Bilingual Speaker Data

We present progress towards bilingual Text-to-Speech which is able to transform a monolingual voice to speak a second language while preserving speaker voice quality. We demonstrate that a bilingual speaker embedding space contains a separate distribution for each language and that a simple transform in speaker space generated by the speaker embedding can be used to control the degree of accent of a synthetic voice in a language. The same…
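The "simple transform in speaker space" can be pictured as translating a speaker embedding along the direction between the two language clusters. The sketch below is a simplified illustration of that idea under assumed names; it is not the paper's exact transform.

```python
import numpy as np

def shift_accent(embedding, lang_a_mean, lang_b_mean, alpha):
    """Translate a speaker embedding along the inter-language direction.

    alpha = 0 keeps the original accent; alpha = 1 applies the full
    shift toward the second language's cluster mean.
    """
    return embedding + alpha * (lang_b_mean - lang_a_mean)
```

Intermediate values of `alpha` would correspond to controlling the degree of accent of the synthetic voice.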