The rapid growth of video-text data presents challenges in storage and computation during training. Online learning, which processes streaming data in real-time, offers a promising solution to these issues while also allowing swift adaptations in scenarios demanding real-time responsiveness. One strategy to enhance the efficiency and effectiveness of learning involves identifying and prioritizing data that improves performance on target downstream tasks. We propose the Relevance and Specificity-based online filtering framework (ReSpec), which selects data based on four criteria: (i) modality alignment for clean data, (ii) task relevance for target-focused data, (iii) specificity for informative and detailed data, and (iv) efficiency for low-latency processing. Relevance is determined by the probabilistic alignment of incoming data with downstream tasks, while specificity employs the distance to a root embedding representing the least specific data as an efficient proxy for informativeness. By establishing reference points from target task data, ReSpec filters incoming data in real-time, eliminating the need for extensive storage and compute. Evaluated on the large-scale WebVid2M and VideoCC3M datasets, ReSpec attains state-of-the-art performance on five zero-shot video retrieval tasks, using as little as 5% of the data while incurring minimal compute. The source code is available at this https URL.
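To make the three embedding-based criteria concrete, the following is a minimal sketch of an online filter over a video-text stream. It assumes pre-computed, L2-normalized embeddings (e.g., from a CLIP-style encoder); the thresholds, the construction of the root embedding, and the function and variable names are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def respec_filter(video_emb, text_emb, task_refs, root_emb,
                      align_thr=0.3, rel_thr=0.25, spec_thr=0.5):
        """Return True if an incoming video-text pair should be kept.

        video_emb, text_emb : L2-normalized embeddings of the incoming pair
        task_refs           : matrix of reference embeddings built offline
                              from the target downstream task data
        root_emb            : embedding standing in for the least specific data
        """
        # (i) Modality alignment: the clip and its caption should agree.
        alignment = float(video_emb @ text_emb)
        if alignment < align_thr:
            return False

        # (ii) Task relevance: similarity to the target-task reference points.
        relevance = float(np.max(task_refs @ text_emb))
        if relevance < rel_thr:
            return False

        # (iii) Specificity: distance from the root embedding; a larger
        # distance serves as a proxy for more informative, detailed data.
        specificity = float(np.linalg.norm(text_emb - root_emb))
        return specificity >= spec_thr

Because each decision reduces to a few vector operations against fixed reference points, such a filter can run at streaming rates, which is the efficiency criterion (iv) in the abstract.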
@article{kim2025_2504.14875,
  title   = {ReSpec: Relevance and Specificity Grounded Online Filtering for Learning on Video-Text Data Streams},
  author  = {Chris Dongjoo Kim and Jihwan Moon and Sangwoo Moon and Heeseung Yun and Sihaeng Lee and Aniruddha Kembhavi and Soonyoung Lee and Gunhee Kim and Sangho Lee and Christopher Clark},
  journal = {arXiv preprint arXiv:2504.14875},
  year    = {2025}
}