Leveraging Vision-Language Pre-training for Human Activity Recognition in Still Images
Main: 5 pages
Bibliography: 1 page
Appendix: 1 page
Figures: 6
Tables: 7
Abstract
Recognising human activity from a single photograph supports indexing, safety, and assistive applications, yet still images lack the motion cues available in video. On 285 MSCOCO images labelled as walking, running, sitting, or standing, CNNs trained from scratch reach 41% accuracy, while fine-tuning the multimodal CLIP model raises accuracy to 76%, demonstrating that contrastive vision-language pre-training substantially improves still-image action recognition for real-world deployments.
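
The abstract does not spell out the fine-tuning recipe. A minimal sketch of one common setup, attaching a linear classification head to CLIP's image encoder and fine-tuning on the four activity labels, is shown below; it uses the Hugging Face `transformers` CLIP implementation, and the model name, head design, and class ordering are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor


class ClipActivityClassifier(nn.Module):
    """Hypothetical sketch: CLIP image encoder plus a linear head for four activity classes."""

    def __init__(self, num_classes: int = 4, model_name: str = "openai/clip-vit-base-patch32"):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(model_name)
        # Classify from CLIP's projected image embedding (512-d for the base/32 model).
        self.head = nn.Linear(self.clip.config.projection_dim, num_classes)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        feats = self.clip.get_image_features(pixel_values=pixel_values)  # (B, projection_dim)
        return self.head(feats)                                          # (B, num_classes)


# Example usage (assumed class order: walking, running, sitting, standing).
if __name__ == "__main__":
    from PIL import Image

    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    model = ClipActivityClassifier()
    inputs = processor(images=[Image.open("person.jpg")], return_tensors="pt")
    logits = model(inputs["pixel_values"])   # shape (1, 4)
    pred = logits.argmax(dim=-1)             # predicted activity index
```

During fine-tuning, the whole network (or just the head, for a linear-probe baseline) would be trained with a standard cross-entropy loss on the labelled MSCOCO subset; the abstract does not state which variant the authors used.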
