StereoFoley: Object-Aware Stereo Audio Generation from Video
Authors: Tornike Karchkhadze, Kuan-Lin Chen, Mojtaba Heydari, Robert Henzel, Alessandro Toso, Mehrez Souden, Joshua Atkins
We present StereoFoley, a video-to-audio generation framework that produces semantically aligned, temporally synchronized, and spatially accurate stereo sound at 48 kHz. While recent generative video-to-audio models achieve strong semantic and temporal fidelity, they remain largely limited to mono or fail to deliver object-aware stereo imaging, constrained by the lack of professionally mixed, spatially accurate video-to-audio datasets. First, we develop and train a base model that generates stereo audio from video, achieving state-of-the-art performance in both semantic accuracy and synchronization. Next, to overcome dataset limitations, we introduce a synthetic data generation pipeline that combines video analysis, object tracking, and audio synthesis with dynamic panning and distance-based loudness controls, enabling spatially accurate, object-aware sound. Finally, we fine-tune the base model on this synthetic dataset, yielding clear object-audio correspondence. Since no established metrics exist, we introduce stereo object-awareness measures and validate them through a human listening study, showing strong correlation with perception. This work establishes the first end-to-end framework for stereo object-aware video-to-audio generation, addressing a critical gap and setting a new benchmark in the field.
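The abstract's synthetic pipeline applies dynamic panning and distance-based loudness to tracked objects. As a rough sketch only (not the paper's actual implementation), one common way to realize those two controls is constant-power stereo panning combined with an inverse-distance gain law; the function name and the choice of attenuation law here are illustrative assumptions:

```python
import numpy as np

def pan_and_attenuate(mono, pan, distance, ref_distance=1.0):
    """Illustrative sketch: constant-power stereo panning plus
    inverse-distance loudness, as one plausible realization of the
    pipeline's "dynamic panning and distance-based loudness controls".

    mono:     1-D mono signal (numpy array)
    pan:      -1.0 (full left) .. +1.0 (full right)
    distance: source distance; gains clamp to 1 inside ref_distance
    """
    # Map pan in [-1, 1] to an angle in [0, pi/2] so that
    # left^2 + right^2 is constant (equal-power panning).
    theta = (pan + 1.0) * np.pi / 4.0
    # Inverse-distance attenuation, clamped so gain never exceeds 1.
    gain = ref_distance / np.maximum(distance, ref_distance)
    left = np.cos(theta) * gain * mono
    right = np.sin(theta) * gain * mono
    return np.stack([left, right])  # shape (2, n_samples)
```

In a per-frame pipeline, `pan` and `distance` would be resampled from the object tracker's trajectory and applied over short overlapping windows rather than a whole clip at once.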
ImmerseDiffusion: A Generative Spatial Audio Latent Diffusion Model
February 12, 2025 · research areas: Human-Computer Interaction, Speech and Natural Language Processing · conference: ICASSP
We introduce ImmerseDiffusion, an end-to-end generative audio model that produces 3D immersive soundscapes conditioned on the spatial, temporal, and environmental conditions of sound objects. ImmerseDiffusion is trained to generate first-order ambisonics (FOA) audio, which is a conventional spatial audio format comprising four channels that can be rendered to multichannel spatial output. The proposed generative system is composed of a spatial…
Learning Spatially-Aware Language and Audio Embeddings
December 9, 2024 · research areas: Methods and Algorithms, Speech and Natural Language Processing · conference: NeurIPS
Humans can picture a sound scene given an imprecise natural language description. For example, it is easy to imagine an acoustic environment given a phrase like “the lion roar came from right behind me!”. For a machine to have the same degree of comprehension, the machine must know what a lion is (semantic attribute), what the concept of “behind” is (spatial attribute) and how these pieces of linguistic information align with the semantic and…