Modeling Speech Emotion With Label Variance and Analyzing Performance Across Speakers and Unseen Acoustic Conditions
Authors: Vikramjit Mitra, Amrit Romana, Dung T. Tran, Erdrin Azemi
Spontaneous speech emotion datasets usually contain perceptual grades, where graders assign an emotion score after listening to the speech files. Such perceptual grades introduce label uncertainty due to variation in grader opinion. Grader variation is commonly addressed by using consensus grades as ground truth, where the emotion with the highest vote is selected; as a consequence, this fails to account for ambiguous instances in which a speech sample may convey multiple emotions, as captured through grader opinion uncertainty. We demonstrate that using the probability density function of the emotion grades as targets, instead of the commonly used consensus grades, provides better performance on benchmark evaluation sets than results reported in the literature. We investigate saliency-driven foundation model (FM) representation selection to train a multi-task speech emotion model and demonstrate state-of-the-art performance on both dimensional and categorical emotion recognition. Comparing representations obtained from different FMs, we observe that focusing on overall test-set performance can be deceiving, as it may fail to reveal a model's generalization capacity across speakers and genders. We demonstrate that evaluating performance across multiple test sets, and analyzing performance across gender and speakers, is useful for assessing the real-world value of emotion models. Finally, we demonstrate that label uncertainty and data skew pose a significant challenge to model evaluation, and that instead of using only the best hypothesis, it is useful to consider the 2-best or 3-best hypotheses.
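To make two of the ideas above concrete (building a distribution target from grader votes rather than a consensus label, and scoring with a k-best criterion), here is a minimal sketch in Python. The emotion label set, function names, and vote encoding are hypothetical illustrations, not the paper's implementation; in practice such a distribution target would typically be paired with a cross-entropy or KL-divergence loss against the model's predicted distribution.

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # hypothetical label set

def soft_label(votes, n_classes=len(EMOTIONS)):
    """Turn per-grader categorical votes into a probability distribution.

    Instead of collapsing graders to a single consensus label (the argmax
    of the vote histogram), the normalized histogram itself becomes the
    training target, preserving grader disagreement.
    """
    counts = np.bincount(votes, minlength=n_classes).astype(float)
    return counts / counts.sum()

def k_best_match(pred_probs, votes, k=2):
    """Count a prediction as correct if any of the model's k most
    probable emotions received at least one grader vote."""
    top_k = np.argsort(pred_probs)[::-1][:k]
    return bool(set(top_k.tolist()) & set(votes.tolist()))

# Example: three graders disagree on one utterance.
votes = np.array([1, 1, 2])            # two vote "happy", one votes "sad"
target = soft_label(votes)             # [0.0, 0.667, 0.333, 0.0]
pred = np.array([0.1, 0.3, 0.5, 0.1])  # model favors "sad"
print(target, k_best_match(pred, votes, k=2))  # True: "sad" is in the top-2
```

In this toy example a consensus-label evaluation would mark the prediction wrong (the plurality vote is "happy"), whereas the 2-best criterion credits the model for matching a minority grader opinion, which is the kind of ambiguity the abstract argues evaluation should account for.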