ResearchTrend.AI
The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better

3 January 2025
Scott Geng, Cheng-Yu Hsieh, Vivek Ramanujan, Matthew Wallingford, Chun-Liang Li, Pang Wei Koh, Ranjay Krishna
Topics: DiffM

Papers citing "The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better"

11 of 11 citing papers shown.

  1. Towards Generating Realistic 3D Semantic Training Data for Autonomous Driving (27 Mar 2025)
     Lucas Nunes, Rodrigo Marcuzzi, Jens Behley, C. Stachniss
     Topics: 3DPC
  2. Economics of Sourcing Human Data (11 Feb 2025)
     Sebastin Santy, Prasanta Bhattacharya, Manoel Horta Ribeiro, Kelsey Allen, Sewoong Oh
  3. COBRA: COmBinatorial Retrieval Augmentation for Few-Shot Adaptation (23 Dec 2024)
     Arnav M. Das, Gantavya Bhatt, Lilly Kumari, Sahil Verma, J. Bilmes
  4. No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance (04 Apr 2024)
     Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
     Topics: VLM
  5. SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training? (02 Feb 2024)
     Hasan Hammoud, Hani Itani, Fabio Pizzati, Philip H. S. Torr, Adel Bibi, Bernard Ghanem
     Topics: CLIP, VLM
  6. Neural Priming for Sample-Efficient Adaptation (16 Jun 2023)
     Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi
     Topics: VLM
  7. Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes (03 May 2023)
     Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
     Topics: ALM
  8. Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling (06 Nov 2021)
     Renrui Zhang, Rongyao Fang, Wei Zhang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li
     Topics: VLM
  9. Learning to Prompt for Vision-Language Models (02 Sep 2021)
     Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
     Topics: VPVLM, CLIP, VLM
  10. Deduplicating Training Data Makes Language Models Better (14 Jul 2021)
      Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
      Topics: SyDa
  11. Scaling Laws for Neural Language Models (23 Jan 2020)
      Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei