Trajectory-aligned Space-time Tokens for Few-shot Action Recognition

Pulkit Kumar, Namitha Padmanabhan, Luke Luo, Sai Saketh Rambhatla, Abhinav Shrivastava
University of Maryland, College Park · GenAI, Meta

Harnessing point trackers like CoTracker and self-supervised representations like DINO, we create trajectory-aligned tokens (TATs) that capture motion and appearance information.

[Teaser animations] Point tracking visualization on the Something-Something dataset. Note: background point tracks are removed.
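The core construction is to sample each frame's appearance features at the tracked point locations, so every trajectory contributes one token per frame. The sketch below illustrates that step, assuming point tracks (e.g. from CoTracker, with background tracks already filtered out) and per-frame DINO patch feature maps are available as tensors; the function name and tensor layouts here are illustrative assumptions, not the released implementation.

import torch
import torch.nn.functional as F

def trajectory_aligned_tokens(tracks, feat_maps, img_size):
    """
    tracks:    (T, N, 2) pixel (x, y) locations of N tracked points over T frames
               (e.g. from a point tracker such as CoTracker).
    feat_maps: (T, C, Hf, Wf) per-frame patch feature maps (e.g. DINO ViT tokens
               reshaped back onto their spatial grid).
    img_size:  (H, W) of the frames the tracks were computed on.
    Returns    (N, T, C): one appearance token per tracked point per frame.
    """
    T, N, _ = tracks.shape
    H, W = img_size
    # Normalize pixel coordinates to [-1, 1] as expected by grid_sample.
    grid = tracks.clone().float()
    grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0
    grid = grid.view(T, N, 1, 2)
    # Bilinearly sample each frame's feature map at the tracked locations.
    sampled = F.grid_sample(feat_maps, grid, align_corners=True)  # (T, C, N, 1)
    return sampled.squeeze(-1).permute(2, 0, 1)                   # (N, T, C)

Because the tokens follow the tracked points, motion is carried by the trajectories themselves while appearance comes from the sampled features, which is what lets the two be handled separately downstream.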

Abstract

We propose a simple yet effective approach for few-shot action recognition, emphasizing the disentanglement of motion and appearance representations. By harnessing recent progress in tracking, specifically point trajectories and self-supervised representation learning, we build trajectory-aligned tokens (TATs) that capture motion and appearance information. This approach significantly reduces the data requirements while retaining essential information. To process these representations, we use a Masked Space-time Transformer that effectively learns to aggregate information to facilitate few-shot action recognition. We demonstrate state-of-the-art results on few-shot action recognition across multiple datasets.
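The aggregation step described above can be pictured as a transformer over the flattened (point × frame) token grid, with attention masked for points that are not visible in a given frame. The toy module below is a generic stand-in for the Masked Space-time Transformer; the layer counts, dimensions, and mean-pooling are assumptions for illustration, not the authors' configuration.

import torch
import torch.nn as nn

class MaskedSpaceTimeEncoder(nn.Module):
    """Toy encoder over trajectory-aligned tokens (TATs)."""
    def __init__(self, feat_dim=384, embed_dim=256, depth=4, heads=8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(
            embed_dim, heads, dim_feedforward=4 * embed_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tats, visibility):
        # tats:       (B, N, T, C) trajectory-aligned tokens
        # visibility: (B, N, T)    True where the point is visible in that frame
        B, N, T, C = tats.shape
        x = self.proj(tats).reshape(B, N * T, -1)       # flatten to a space-time sequence
        pad_mask = ~visibility.reshape(B, N * T)        # True = exclude from attention
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        # Mean-pool the visible tokens into a clip-level embedding for matching.
        keep = visibility.reshape(B, N * T, 1).float()
        return (x * keep).sum(1) / keep.sum(1).clamp(min=1)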

Overview of the TATs framework.

Results

We report results on six few-shot benchmarks: Kinetics, SSV2 (full), SSV2 (small), UCF-101, HMDB-51, and FineGym. Our method outperforms state-of-the-art methods on almost all of these benchmarks, and it remains robust as the number of shots and the number of ways vary.
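For context, each few-shot episode samples N classes ("ways") with K labelled support clips each ("shots") plus query clips to classify. The snippet below scores one such episode from clip embeddings using cosine similarity to class prototypes; this is a generic episodic-evaluation sketch and may differ from the matching strategy used in the paper.

import torch
import torch.nn.functional as F

def episode_accuracy(support_emb, support_labels, query_emb, query_labels):
    """
    support_emb: (N*K, D), support_labels: (N*K,) with values in [0, N)
    query_emb:   (Q, D),   query_labels:   (Q,)
    Returns episode accuracy of nearest-prototype classification.
    """
    n_way = int(support_labels.max().item()) + 1
    # Class prototypes: mean support embedding per class.
    protos = torch.stack(
        [support_emb[support_labels == c].mean(0) for c in range(n_way)])
    # Cosine similarity between each query and each prototype.
    sims = F.normalize(query_emb, dim=-1) @ F.normalize(protos, dim=-1).T  # (Q, N)
    preds = sims.argmax(dim=1)
    return (preds == query_labels).float().mean().item()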

Performance Comparison

BibTeX


      @misc{kumar2024trajectoryalignedspacetimetokensfewshot,
        title={Trajectory-aligned Space-time Tokens for Few-shot Action Recognition},
        author={Pulkit Kumar and Namitha Padmanabhan and Luke Luo and Sai Saketh Rambhatla and Abhinav Shrivastava},
        year={2024},
        eprint={2407.18249},
        archivePrefix={arXiv},
        primaryClass={cs.CV},
        url={https://arxiv.org/abs/2407.18249}
      }