Spatio-Temporal Context for Action Detection
Authors: Manuel Sarmiento Calderó, David Varas, Elisenda Bou-Balust
Research in action detection has grown in recent years, as it plays a key role in video understanding. Modelling the interactions (either spatial or temporal) between actors and their context has proven essential for this task. While recent works use spatial features with aggregated temporal information, this work proposes to use non-aggregated temporal information, adding an attention-based method that leverages spatio-temporal interactions between elements in the scene along the clip. The main contribution of this work is the introduction of two cross-attention blocks that effectively model spatial relations and capture short-range temporal interactions. Experiments on the AVA dataset show the advantages of the proposed approach, which models spatio-temporal relations between relevant elements in the scene and outperforms other methods that model actor interactions with their context by +0.31 mAP.
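To make the idea of cross-attention between actors and non-aggregated temporal context concrete, the following is a minimal PyTorch sketch. The class name, dimensions, and layer layout are illustrative assumptions, not the paper's exact architecture; the key point it demonstrates is that actor queries attend over context tokens kept separate per frame rather than pooled over time.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Illustrative cross-attention block: actor features (queries) attend
    to spatio-temporal context features (keys/values). Hypothetical
    sketch; not the authors' exact implementation."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.norm_out = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, actors, context):
        # actors:  (B, N_actors, dim)
        # context: (B, T * N_context, dim) -- per-frame context tokens are
        # kept separate (non-aggregated) rather than pooled over time.
        attended, _ = self.attn(self.norm_q(actors),
                                self.norm_kv(context),
                                self.norm_kv(context))
        x = actors + attended                     # residual connection
        return x + self.ffn(self.norm_out(x))     # position-wise FFN

# Example: 3 actor proposals attending to 8 frames x 16 context tokens.
actors = torch.randn(2, 3, 256)
context = torch.randn(2, 8 * 16, 256)
out = CrossAttentionBlock()(actors, context)      # shape: (2, 3, 256)
```

Stacking two such blocks, one over spatial context within a frame and one over tokens across neighboring frames, would mirror the spatial/short-range-temporal split the abstract describes.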
ImmerseDiffusion: A Generative Spatial Audio Latent Diffusion Model
February 12, 2025 | Research areas: Human-Computer Interaction, Speech and Natural Language Processing | Conference: ICASSP
We introduce ImmerseDiffusion, an end-to-end generative audio model that produces 3D immersive soundscapes conditioned on the spatial, temporal, and environmental conditions of sound objects. ImmerseDiffusion is trained to generate first-order ambisonics (FOA) audio, which is a conventional spatial audio format comprising four channels that can be rendered to multichannel spatial output. The proposed generative system is composed of a spatial…
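To clarify the FOA format mentioned above, here is a minimal NumPy sketch that encodes a mono source into the four first-order ambisonics channels. It assumes the common ACN channel order (W, Y, Z, X) with SN3D normalization (the AmbiX convention); the abstract does not specify which convention ImmerseDiffusion uses, so this is illustrative only.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambisonics (FOA).
    Assumes ACN order (W, Y, Z, X) and SN3D normalization; a
    hypothetical helper, not part of ImmerseDiffusion."""
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    w = mono                              # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)    # left-right axis
    z = mono * np.sin(el)                 # up-down axis
    x = mono * np.cos(az) * np.cos(el)    # front-back axis
    return np.stack([w, y, z, x])         # shape: (4, num_samples)

# Example: a 440 Hz tone placed 90 degrees to the left, at ear height.
sr = 48000
t = np.linspace(0, 1.0, sr, endpoint=False)
foa = encode_foa(np.sin(2 * np.pi * 440 * t), azimuth_deg=90, elevation_deg=0)
```

Because the four channels encode direction this way, an FOA signal can later be decoded (rendered) to any multichannel speaker layout, which is the property the abstract refers to.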
CVPR 2021
June 10, 2021 | Research area: Computer Vision
Apple sponsored the annual Conference on Computer Vision and Pattern Recognition (CVPR). The conference focuses on computer vision and its applications and took place virtually from June 19 to 25.