Self-Consuming Generative Models Go MAD

International Conference on Learning Representations (ICLR), 2024
4 July 2023
Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk
arXiv: 2307.01850

Papers citing "Self-Consuming Generative Models Go MAD"

50 / 122 papers shown
Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories
Xinxi Zhang, Shiwei Tan, Quang Nguyen, Quan Dao, Ligong Han, Xiaoxiao He, Tunyu Zhang, Alen Mrdovic, Dimitris N. Metaxas
28 Nov 2025

Stabilizing Self-Consuming Diffusion Models with Latent Space Filtering
Zhongteng Cai, Y. Wang, Yang Liu, Xueru Zhang
16 Nov 2025

SynQuE: Estimating Synthetic Dataset Quality Without Annotations
Arthur Chen, Victor Zhong
06 Nov 2025

Forgetting is Everywhere
Ben Sanati, Thomas L. Lee, Trevor A. McInroe, Aidan Scannell, Nikolay Malkin, David Abel, Amos Storkey
06 Nov 2025

Why Less is More (Sometimes): A Theory of Data Curation
Elvis Dohmatob, Mohammad Pezeshki, Reyhane Askari Hemmat
05 Nov 2025

Synthetic Eggs in Many Baskets: The Impact of Synthetic Data Diversity on LLM Fine-Tuning
Max Schaffelder, Albert Gatt
03 Nov 2025

Fine-tuning Flow Matching Generative Models with Intermediate Feedback
Jiajun Fan, Chaoran Cheng, Shuaike Shen, Xiangxin Zhou, Ge Liu
20 Oct 2025

Escaping Model Collapse via Synthetic Data Verification: Near-term Improvements and Long-term Convergence
Bingji Yi, Qiyuan Liu, Yuwei Cheng, Haifeng Xu
18 Oct 2025

Beyond Real Data: Synthetic Data through the Lens of Regularization
Amitis Shidani, Tyler Farghly, Yang Sun, Habib Ganjgahi, George Deligiannidis
09 Oct 2025

Neon: Negative Extrapolation From Self-Training Improves Image Generation
Sina Alemohammad, Zinan Lin, Richard G. Baraniuk
04 Oct 2025

Deep Generative Continual Learning using Functional LoRA: FunLoRA
Victor Enescu, Hichem Sahbi
03 Oct 2025

Characterizing Model Behavior Under Synthetic Data Training: An Empirical Study Across Scales and Mixing Ratios
Y. Du, G. Wu, G. Tang, W. Wang, Q. Fan
01 Oct 2025

Learning in an Echo Chamber: Online Learning with Replay Adversary
Daniil Dmitriev, Harald Eskelund Franck, Carolin Heinzler, Amartya Sanyal
29 Sep 2025

Preventing Model Collapse Under Overparametrization: Optimal Mixing Ratios for Interpolation Learning and Ridge Regression
Anvit Garg, Sohom Bhattacharya, Pragya Sur
26 Sep 2025

Training-Free Synthetic Data Generation with Dual IP-Adapter Guidance
Luc Boudier, Loris Manganelli, Eleftherios Tsonis, Nicolas Dufour, Vicky Kalogeiton
26 Sep 2025

Review of Hallucination Understanding in Large Language and Vision Models
Zhengyi Ho, Siyuan Liang, D. Tao
26 Sep 2025

A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective
Lianghe Shi, Meng Wu, Huijie Zhang, Zekai Zhang, Molei Tao, Qing Qu
20 Sep 2025

ForTIFAI: Fending Off Recursive Training Induced Failure for AI Model Collapse
Soheil Zibakhsh Shabgahi, Pedram Aghazadeh, Azalia Mirhoseini, F. Koushanfar
10 Sep 2025

Benchmarking the Detection of LLMs-Generated Modern Chinese Poetry
Shanshan Wang, Junchao Wu, Fengying Ye, Jingming Yao, Lidia S. Chao, Yang Li
01 Sep 2025

Non-Verbal Vocalisations and their Challenges: Emotion, Privacy, Sparseness, and Real Life
A. Batliner, Shahin Amiriparian, B. Schuller
03 Aug 2025

Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English
Bryce Anderson, Riley Galpin, Tom S. Juzek
01 Aug 2025

Flow Matching Policy Gradients
David McAllister, Songwei Ge, Brent Yi, Chung Min Kim, Ethan Weber, Hongsuk Choi, Haiwen Feng, Angjoo Kanazawa
28 Jul 2025

Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs
Yan Scholten, Sophie Xhonneux, Leo Schwinn, Stephan Günnemann
06 Jul 2025

BitMark: Watermarking Bitwise Autoregressive Image Generative Models
Louis Kerner, Michel Meintz, Bihe Zhao, Franziska Boenisch, Adam Dziedzic
26 Jun 2025

A theoretical basis for model collapse in recursive training
Vivek Shripad Borkar
11 Jun 2025

Ambient Diffusion Omni: Training Good Models with Bad Data
Giannis Daras, Adrian Rodriguez-Munoz, Adam R. Klivans, Antonio Torralba, Constantinos Daskalakis
10 Jun 2025

On Inverse Problems, Parameter Estimation, and Domain Generalization
Deborah Pereg
06 Jun 2025

What happens when generative AI models train recursively on each others' outputs?
Hung Ahn Vu, Galen Reeves, Emily Wenger
27 May 2025

Can Large Reasoning Models Self-Train?
Sheikh Shafayat, Fahim Tajwar, Ruslan Salakhutdinov, J. Schneider, Andrea Zanette
27 May 2025

LLM Web Dynamics: Tracing Model Collapse in a Network of LLMs
Tianyu Wang, Lingyou Pang, Akira Horiguchi, Carey E. Priebe
26 May 2025

When Models Don't Collapse: On the Consistency of Iterative MLE
Daniel Barzilai, Ohad Shamir
25 May 2025

Self-Consuming Generative Models with Adversarially Curated Data
Xiukun Wei, Xueru Zhang
14 May 2025

Multi-modal Synthetic Data Training and Model Collapse: Insights from VLMs and Diffusion Models
Zizhao Hu, Mohammad Rostami, Jesse Thomason
10 May 2025

Information Retrieval in the Age of Generative AI: The RGB Model
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2025
M. Garetto, Alessandro Cornacchia, Franco Galante, Emilio Leonardi, A. Nordio, A. Tarable
29 Apr 2025

AGATE: Stealthy Black-box Watermarking for Multimodal Model Copyright Protection
Jianbo Gao, Keke Gai, Jing Yu, Liehuang Zhu, Qi Wu
28 Apr 2025

Delving into: the quantification of Ai-generated content on the internet (synthetic data)
Dirk HR Spennemann
29 Mar 2025

Synthetic Data is an Elegant GIFT for Continual Vision-Language Models
Computer Vision and Pattern Recognition (CVPR), 2025
Bin Wu, Wuxuan Shi, Jinqiao Wang, Mang Ye
06 Mar 2025

Compositional World Knowledge leads to High Utility Synthetic data
Sachit Gaudi, Gautam Sreekumar, Vishnu Boddeti
06 Mar 2025

Position: Model Collapse Does Not Mean What You Think
Rylan Schaeffer, Joshua Kazdan, Alvan Caleb Arulandu, Sanmi Koyejo
05 Mar 2025

A Theoretical Perspective: How to Prevent Model Collapse in Self-consuming Training Loops
International Conference on Learning Representations (ICLR), 2025
Shi Fu, Yingjie Wang, Yuzhu Chen, Xinmei Tian, Dacheng Tao
26 Feb 2025

FRIDA to the Rescue! Analyzing Synthetic Data Effectiveness in Object-Based Common Sense Reasoning for Disaster Response
Mollie Shichman, C. Bonial, Austin Blodgett, Taylor Hudson, Francis Ferraro, Rachel Rudinger
25 Feb 2025

Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Yiwen Ding, Zhiheng Xi, Wei He, Zhuoyuan Li, Yitao Zhai, Xiaowei Shi, Xunliang Cai, Tao Gui, Qi Zhang
24 Feb 2025

Machine-generated text detection prevents language model collapse
George Drayson, Emine Yilmaz, Vasileios Lampos
21 Feb 2025

Preference Optimization for Reasoning with Pseudo Feedback
International Conference on Learning Representations (ICLR), 2024
Fangkai Jiao, Geyang Guo, Xingxing Zhang, Nancy F. Chen, Shafiq Joty, Furu Wei
17 Feb 2025

Escaping Collapse: The Strength of Weak Data for Large Language Model Training
Kareem Amin, Sara Babakniya, Alex Bie, Weiwei Kong, Umar Syed, Sergei Vassilvitskii
13 Feb 2025

Does Training on Synthetic Data Make Models Less Robust?
Lingze Zhang, Ellie Pavlick
11 Feb 2025

The Best Instruction-Tuning Data are Those That Fit
Dylan Zhang, Qirun Dai, Hao Peng
06 Feb 2025

FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing
Jinya Sakurai, Issei Sato
06 Feb 2025

Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, Dimitris Papailiopoulos
03 Feb 2025

Spend Wisely: Maximizing Post-Training Gains in Iterative Synthetic Data Bootstrapping
Pu Yang, Yunzhen Feng, Ziyuan Chen, Yuhang Wu, Zhuoyuan Li
31 Jan 2025