Vamos: Versatile Action Models for Video Understanding

Brown University      Honda Research Institute

Abstract

What makes good representations for video understanding, such as anticipating future activities or answering video-conditioned questions? While earlier approaches focus on end-to-end learning directly from video pixels, we propose to revisit text-based representations, such as general-purpose video captions, which are interpretable and can be directly consumed by large language models (LLMs). Intuitively, different video understanding tasks may require representations that are complementary and at different granularities. To this end, we propose versatile action models (Vamos), a learning framework powered by a large language model acting as the "reasoner", which can flexibly leverage visual embeddings and free-form text descriptions as its input. To interpret the important text evidence for question answering, we generalize the concept bottleneck model to work with tokens and nonlinear models, using hard attention to select a small subset of tokens from the free-form text as inputs to the LLM reasoner. We evaluate Vamos on four complementary video understanding benchmarks, Ego4D, NeXT-QA, IntentQA, and EgoSchema, assessing its capability to model temporal dynamics, encode visual history, and perform reasoning. Surprisingly, we observe that text-based representations consistently achieve competitive performance on all benchmarks, and that visual embeddings provide marginal or no performance improvement, demonstrating the effectiveness of text-based video representations in the LLM era. We also demonstrate that our token bottleneck model is able to select relevant evidence from free-form text, supports test-time intervention, and achieves a nearly 5x inference speedup while maintaining competitive question answering performance.

Vamos: Versatile Action Models

We introduce Vamos, a simple yet effective framework that uses LLMs to unify video dynamics modeling tasks, including comprehending historical content (video question answering, VQA) and predicting the future (long-term action anticipation, LTA). Vamos flexibly combines distributed visual features with textual video representations, including discrete action labels and free-form video captions. A minimal sketch of how such inputs could be assembled for the LLM reasoner is shown below.
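The following is a minimal, self-contained sketch (not the authors' released code) of how a Vamos-style input sequence could be assembled: visual embeddings are linearly projected into the LLM token embedding space and concatenated with embedded text tokens (captions, action labels, question). All module names, dimensions, and the standalone embedding table are illustrative assumptions.

```python
# Hedged sketch of Vamos-style input assembly; names and dimensions are assumptions.
import torch
import torch.nn as nn

class VamosInputAssembler(nn.Module):
    def __init__(self, visual_dim=1024, llm_dim=4096, vocab_size=32000):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, llm_dim)     # maps video features to the LLM space
        self.token_embed = nn.Embedding(vocab_size, llm_dim)  # stand-in for the LLM's embedding table

    def forward(self, visual_feats, text_token_ids):
        # visual_feats: (batch, num_frames, visual_dim) pooled per-frame embeddings
        # text_token_ids: (batch, seq_len) tokenized captions / action labels / question
        vis = self.visual_proj(visual_feats)    # (batch, num_frames, llm_dim)
        txt = self.token_embed(text_token_ids)  # (batch, seq_len, llm_dim)
        # The LLM "reasoner" consumes the concatenated sequence; either modality
        # can be dropped to run in text-only or vision-only mode.
        return torch.cat([vis, txt], dim=1)

assembler = VamosInputAssembler()
fused = assembler(torch.randn(2, 8, 1024), torch.randint(0, 32000, (2, 64)))
print(fused.shape)  # torch.Size([2, 72, 4096])
```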

Vamos Model

Token Bottleneck Model

We propose the token bottleneck model (TBM), inspired by concept bottleneck models (CBMs), which achieve interpretable object classification by inspecting the weights of a learned linear classifier (left). Unlike a CBM, Vamos does not require a pre-defined list of concepts; it works directly with tokenized text inputs. To provide input tokens to the reasoning model (an LLM), we leverage hard attention to generate binary rather than continuous weights (middle). The token bottleneck can be interpreted directly, intervened on with human input (right), or augmented with residual visual information.

Figure: The token bottleneck model (TBM).
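Below is a hedged sketch of one way the hard-attention selection could be realized, using a straight-through top-k mask over per-token scores; the scorer, the value of k, and the dimensions are our assumptions for illustration, not the paper's implementation details.

```python
# Illustrative token bottleneck: score caption tokens, keep a hard top-k subset,
# and use a straight-through estimator so the binary selection remains trainable.
import torch
import torch.nn as nn

class TokenBottleneck(nn.Module):
    def __init__(self, dim=4096, k=32):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # per-token relevance score
        self.k = k

    def forward(self, token_embs):
        # token_embs: (batch, seq_len, dim) embeddings of free-form caption tokens
        scores = self.scorer(token_embs).squeeze(-1)            # (batch, seq_len)
        soft = torch.softmax(scores, dim=-1)                    # continuous weights
        topk = scores.topk(self.k, dim=-1).indices
        hard = torch.zeros_like(soft).scatter_(-1, topk, 1.0)   # binary 0/1 mask
        # Straight-through: forward pass uses the hard mask, gradients flow via soft.
        mask = hard + soft - soft.detach()
        # Only the selected tokens are passed on to the LLM reasoner; the binary mask
        # is directly inspectable and can be edited by a human at test time.
        return token_embs * mask.unsqueeze(-1), mask

bottleneck = TokenBottleneck(dim=256, k=8)
selected, mask = bottleneck(torch.randn(2, 100, 256))
print(mask.sum(dim=-1))  # 8 tokens selected per example in the forward pass
```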

Visualization of Vamos Prediction and Manual Intervention

Benchmark Results

We compare Vamos with other state-of-the-art models on four benchmarks: EgoSchema, NeXT-QA, IntentQA, and Ego4D LTA. Vamos achieves state-of-the-art performance on all four datasets.

Figure: The Vamos token selector.

Acknowledgements

This work is supported by Honda Research Institute USA and Samsung Advanced Institute of Technology. We would like to thank Karttikeya Mangalam, Raiymbek Akshulakov, and Shyamal Buch for their kind help with EgoSchema and ATP; Apoorv Khandelwal, Calvin Luo, David Isele, Songpo Li, and Tian Yun for their useful feedback and discussions. Our research was conducted using computational resources at the Center for Computation and Visualization at Brown University.

BibTeX

@misc{wang2023vamos,
      title={Vamos: Versatile Action Models for Video Understanding},
      author={Shijie Wang and Qi Zhao and Minh Quan Do and Nakul Agarwal and Kwonjoon Lee and Chen Sun},
      year={2023},
      eprint={2311.13627},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}