
ExAct: A Video-Language Benchmark for Expert Action Analysis

Main: 9 pages, 20 figures, 3 tables. Bibliography: 4 pages. Appendix: 10 pages.
Abstract

We present ExAct, a new video-language benchmark for expert-level understanding of skilled physical human activities. The benchmark contains 3,521 expert-curated video question-answer pairs spanning 11 physical activities across 6 domains: Sports, Bike Repair, Cooking, Health, Music, and Dance. ExAct requires the correct answer to be selected from five carefully designed candidate options, and thus necessitates a nuanced, fine-grained, expert-level understanding of physical human skills. Evaluating recent state-of-the-art VLMs on ExAct reveals a substantial gap relative to human expert performance: the best-performing model, GPT-4o, achieves only 44.70% accuracy, well below the 82.02% attained by trained human specialists. We believe ExAct will be beneficial for developing and evaluating VLMs capable of precise understanding of human skills across physical and procedural domains. Dataset and code are available at this https URL.
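Since ExAct is scored as five-way multiple-choice accuracy, evaluation reduces to comparing a model's chosen option index against the ground-truth index for each question. The Python sketch below illustrates this; the record fields ("answer_idx") and the filename are assumptions for illustration, not the released schema.

import json

# Minimal sketch of 5-way multiple-choice evaluation.
# The record layout ("answer_idx") and the filename below are
# hypothetical; consult the released dataset for the actual schema.

def load_benchmark(path):
    with open(path) as f:
        return json.load(f)  # assumed: a list of QA records

def accuracy(items, predict):
    # predict(item) returns the model's chosen option index in [0, 5).
    correct = sum(1 for item in items if predict(item) == item["answer_idx"])
    return correct / len(items)

if __name__ == "__main__":
    items = load_benchmark("exact_qa.json")  # placeholder path
    # Trivial baseline: always choose option 0 (~20% expected on 5-way MCQ).
    print(f"accuracy: {accuracy(items, lambda item: 0):.2%}")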

@article{yi2025_2506.06277,
  title={ExAct: A Video-Language Benchmark for Expert Action Analysis},
  author={Han Yi and Yulu Pan and Feihong He and Xinyu Liu and Benjamin Zhang and Oluwatumininu Oguntola and Gedas Bertasius},
  journal={arXiv preprint arXiv:2506.06277},
  year={2025}
}