ResearchTrend.AI
arXiv:2310.10543
ViPE: Visualise Pretty-much Everything

16 October 2023
Hassan Shahmohammadi, Adhiraj Ghosh, Hendrik P. A. Lensch
Topics: DiffM

Papers citing "ViPE: Visualise Pretty-much Everything"

5 of 5 citing papers shown.

1. No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
   Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
   Topics: VLM. 04 Apr 2024.

2. Text-Only Training for Image Captioning using Noise-Injected CLIP
   David Nukrai, Ron Mokady, Amir Globerson
   Topics: VLM, CLIP. 01 Nov 2022.

3. FLUTE: Figurative Language Understanding through Textual Explanations
   Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
   24 May 2022.

4. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
   Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
   Topics: MLLM, BDL, VLM, CLIP. 28 Jan 2022.

5. Towards Zero-Label Language Learning
   Zirui Wang, Adams Wei Yu, Orhan Firat, Yuan Cao
   Topics: SyDa. 19 Sep 2021.