
Research in action detection has grown in recent years, as it plays a key role in video understanding. Modelling the interactions (either spatial or temporal) between actors and their context has proven essential for this task. While recent works use spatial features with aggregated temporal information, this work proposes to use non-aggregated temporal information, adding an attention-based method that leverages spatio-temporal interactions between elements in the scene along the clip. The main contribution of this work is the introduction of two cross-attention blocks that effectively model spatial relations and capture short-range temporal interactions. Experiments on the AVA dataset show the advantages of the proposed approach, which models spatio-temporal relations between relevant elements in the scene, outperforming other methods that model actor–context interactions by +0.31 mAP.
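The abstract does not spell out the internals of the two cross-attention blocks, but the underlying operation is standard scaled dot-product cross-attention: one set of features (e.g. actor features) forms the queries, and another set (e.g. spatio-temporal context features from the clip) forms the keys and values. The sketch below is a generic illustration of that mechanism, not the paper's exact architecture; the shapes, variable names, and the actor/context interpretation are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention.

    queries: (n_q, d)   e.g. actor features (hypothetical role)
    keys:    (n_kv, d)  e.g. spatio-temporal context features
    values:  (n_kv, d_v)
    Returns (n_q, d_v): each query's convex combination of the values.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_kv) similarity logits
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values

# Toy example: 2 actor queries attending over 5 context elements.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 8))
k = rng.standard_normal((5, 8))
v = rng.standard_normal((5, 8))
out = cross_attention(q, k, v)
print(out.shape)  # (2, 8)
```

In a spatio-temporal setting, the "short-range temporal" variant would draw keys and values from features of neighbouring frames rather than from the same frame, which is how cross-attention can mix non-aggregated temporal information into each actor's representation.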

Related readings and updates.

Apple at CVPR 2021

Apple is sponsoring the annual conference of Computer Vision and Pattern Recognition (CVPR). The conference focuses on computer vision and its applications and is taking place virtually from June 19 to 25.


Making Mobile Applications Accessible with Machine Learning

At Apple we use machine learning to teach our products to understand the world more as humans do. Of course, understanding the world better means building great assistive experiences. Machine learning can help our products be intelligent and intuitive enough to improve the day-to-day experiences of people living with disabilities. We can build machine-learned features that support a wide range of users, including those who are blind or have low vision, those who are deaf or hard of hearing, those with physical motor limitations, and those with cognitive disabilities.
