We present Spatial LibriSpeech, a spatial audio dataset with over 570 hours of 19-channel audio, first-order ambisonics, and optional distractor noise. Spatial LibriSpeech is designed for machine learning model training, and it includes labels for source position, speaking direction, room acoustics, and room geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples with more than 220k simulated acoustic conditions across more than 8k synthetic rooms. To demonstrate the utility of our dataset, we train models on four fundamental spatial audio tasks, obtaining a median absolute error of 6.60° on 3D source localization, 0.43 m on distance estimation, 90.66 ms on T30 estimation, and 2.74 dB on direct-to-reverberant ratio estimation. We show that the same models transfer to widely used evaluation datasets, obtaining, for instance, a median absolute error of 12.43° on 3D source localization on TUT Sound Events 2018 and 157.32 ms on T30 estimation on the ACE Challenge.
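The 3D source-localization figures above are median absolute angular errors, conventionally computed as the great-circle angle between predicted and ground-truth source-direction vectors. A minimal sketch of that metric (the function name and sample arrays are ours, not from the dataset's tooling):

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Great-circle angle (degrees) between rows of two (N, 3) direction arrays."""
    # Normalize to unit vectors so the dot product equals the cosine of the angle.
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    # Clip guards against values like 1.0000000002 from floating-point round-off.
    cos = np.clip(np.sum(pred * true, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Illustrative predictions/labels as Cartesian direction vectors.
pred_dirs = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
true_dirs = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

errors = angular_error_deg(pred_dirs, true_dirs)   # per-sample errors: 90°, 0°
median_abs_error = np.median(errors)               # the statistic reported above
```

Reporting the median rather than the mean makes the statistic robust to the heavy-tailed errors that localization models typically produce on hard (e.g. highly reverberant) conditions.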
