SciVid: Cross-Domain Evaluation of Video Models in Scientific Applications

Yana Hasson
Pauline Luc
Liliane Momeni
Maks Ovsjanikov
Guillaume Le Moing
Alina Kuznetsova
Ira Ktena
Jennifer J. Sun
Skanda Koppula
Dilara Gokay
Joseph Heyward
Etienne Pot
Andrew Zisserman
Main: 7 pages
Bibliography: 5 pages
Appendix: 14 pages
25 figures
14 tables
Abstract

In recent years, there has been a proliferation of spatiotemporal foundation models in different scientific disciplines. While promising, these models are often domain-specific and are only assessed within the particular applications for which they are designed. Given that many tasks can be represented as video modeling problems, video foundation models (ViFMs) hold considerable promise as general-purpose, domain-agnostic approaches. However, it is not known whether the knowledge acquired on large-scale but potentially out-of-domain data can be effectively transferred across diverse scientific disciplines, or whether a single pretrained ViFM can be competitive with domain-specific baselines. To address this, we introduce SciVid, a comprehensive benchmark comprising five *Sci*entific *Vid*eo tasks across medical computer vision, animal behavior, and weather forecasting. We adapt six leading ViFMs to SciVid using simple trainable readout modules, establishing strong baselines and demonstrating the potential for effective transfer learning. Specifically, we show that state-of-the-art results can be obtained in several applications by leveraging the general-purpose representations from ViFM backbones. Furthermore, our results reveal the limitations of existing ViFMs and highlight opportunities for the development of generalizable models for high-impact scientific applications. We release our code at this https URL to facilitate further research in the development of ViFMs.
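The evaluation protocol described above, a frozen pretrained backbone paired with a small trainable readout, can be illustrated with a minimal sketch. The abstract does not specify the readout architecture beyond "simple trainable readout modules", so the following stands in a fixed random projection for the frozen ViFM backbone and a linear (logistic-regression) readout trained on its features; all names and dimensions here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen ViFM backbone: maps a video clip of
# shape (T, H, W, C) to a feature vector via a fixed random projection.
# In the benchmark setting this would be a pretrained model kept frozen.
FEAT_DIM, CLIP_DIM = 16, 8 * 4 * 4 * 3
W_frozen = rng.standard_normal((CLIP_DIM, FEAT_DIM)) / np.sqrt(CLIP_DIM)

def frozen_backbone(clip):
    """Flatten the clip and project it to a feature vector (no training)."""
    return clip.reshape(-1) @ W_frozen

def train_readout(feats, labels, steps=200, lr=0.5):
    """Trainable readout: a linear classifier fit on frozen features
    (binary logistic regression via plain gradient descent)."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(steps):
        logits = np.clip(feats @ w + b, -30.0, 30.0)
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - labels
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy task: synthetic "clips" whose class shifts the per-pixel mean,
# so the label is linearly recoverable from the projected features.
labels = rng.integers(0, 2, size=64).astype(float)
clips = rng.standard_normal((64, 8, 4, 4, 3)) + labels[:, None, None, None, None]
feats = np.stack([frozen_backbone(c) for c in clips])

w, b = train_readout(feats, labels)
preds = (feats @ w + b > 0).astype(float)
accuracy = (preds == labels).mean()
print(f"readout accuracy on training clips: {accuracy:.2f}")
```

Only the readout parameters `w` and `b` are updated; the backbone weights never change, which is what makes this adaptation scheme cheap enough to apply uniformly across several backbones and tasks.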
