Vision-language models for decoding provider attention during neonatal resuscitation

1 April 2024
Felipe Parodi, Jordan K Matelsky, Alejandra Regla-Vargas, Elizabeth E. Foglia, Charis Lim, Danielle Weinberg, Konrad Kording, Heidi Herrick, Michael L Platt

Papers citing "Vision-language models for decoding provider attention during neonatal resuscitation"

3 / 3 papers shown

GazeSAM: What You See is What You Segment
Bin Wang, Armstrong Aboah, Zheyu Zhang, Ulas Bagci
26 Apr 2023

Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling
Renrui Zhang, Rongyao Fang, Wei Zhang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li
VLM
06 Nov 2021

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
ViT
05 Oct 2021