We present Spatial LibriSpeech, a spatial audio dataset with over 570 hours of 19-channel audio, first-order ambisonics, and optional distractor noise. The dataset is designed for machine learning model training and includes labels for source position, speaking direction, room acoustics, and geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples with >220k simulated acoustic conditions across >8k synthetic rooms. To demonstrate the utility of the dataset, we train models on four fundamental spatial audio tasks, resulting in median absolute errors of 6.60° on 3D source localization, 0.43 m on distance estimation, 90.66 ms on T30 estimation, and 2.74 dB on direct-to-reverberant ratio estimation. We show that the same models transfer to widely used evaluation datasets, obtaining, for instance, a median absolute error of 12.43° on 3D source localization on TUT Sound Events 2018 and 157.32 ms on T30 estimation on the ACE Challenge.
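For readers unfamiliar with the localization metric, the 3D source-localization error reported above is the angle between the predicted and ground-truth source directions, with the median taken over the evaluation set. Below is a minimal sketch of that computation, assuming directions are given as Cartesian vectors; the function name and toy arrays are illustrative, not part of the released dataset or code.

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Absolute angular error in degrees between predicted and
    ground-truth 3D source directions, given as (N, 3) arrays."""
    # Normalize both sets of vectors to unit length.
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    true = true / np.linalg.norm(true, axis=-1, keepdims=True)
    # Angle between each pair of unit vectors via the dot product,
    # clipped to [-1, 1] to guard against floating-point drift.
    cos_sim = np.clip(np.sum(pred * true, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))

# Toy example with made-up predictions and ground truth.
true_dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pred_dirs = np.array([[0.99, 0.10, 0.0], [0.10, 0.95, 0.05]])
errors = angular_error_deg(pred_dirs, true_dirs)
print(f"median absolute angular error: {np.median(errors):.2f}°")
```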
