On the Stability of Iterative Retraining of Generative Models on their own Data

30 September 2023
Quentin Bertrand, A. Bose, Alexandre Duplessis, Marco Jiralerspong, Gauthier Gidel
ArXiv · PDF · HTML

Papers citing "On the Stability of Iterative Retraining of Generative Models on their own Data"

35 of 35 papers shown

Self-Consuming Generative Models with Adversarially Curated Data
Xiukun Wei, Xueru Zhang
WIGM · 14 May 2025

Information Retrieval in the Age of Generative AI: The RGB Model
M. Garetto, Alessandro Cornacchia, Franco Galante, Emilio Leonardi, A. Nordio, A. Tarable
29 Apr 2025

Recursive Training Loops in LLMs: How training data properties modulate distribution shift in generated data?
Grgur Kovač, Jérémy Perez, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
04 Apr 2025

Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to Leverage Ontologies, and How to Do Without Them
Marc Felix Brinner, Tarek Al Mustafa, Sina Zarrieß
27 Mar 2025

Position: Model Collapse Does Not Mean What You Think
Rylan Schaeffer, Joshua Kazdan, Alvan Caleb Arulandu, Sanmi Koyejo
05 Mar 2025

A Theoretical Perspective: How to Prevent Model Collapse in Self-consuming Training Loops
Shi Fu, Yingjie Wang, Yuzhu Chen, Xinmei Tian, Dacheng Tao
26 Feb 2025

Machine-generated text detection prevents language model collapse
George Drayson, Emine Yilmaz, Vasileios Lampos
DeLMO · 21 Feb 2025

Escaping Collapse: The Strength of Weak Data for Large Language Model Training
Kareem Amin, Sara Babakniya, Alex Bie, Weiwei Kong, Umar Syed, Sergei Vassilvitskii
13 Feb 2025

Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, Dimitris Papailiopoulos
ReLM · VLM · LRM · AI4CE · 03 Feb 2025

Rate of Model Collapse in Recursive Training
A. Suresh, A. Thangaraj, Aditya Nanda Kishore Khandavally
SyDa · 23 Dec 2024

Universality of the $\pi^2/6$ Pathway in Avoiding Model Collapse
Apratim Dey, D. Donoho
30 Oct 2024

Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World
Joshua Kazdan, Rylan Schaeffer, Apratim Dey, Matthias Gerstgrasser, Rafael Rafailov, D. Donoho, Sanmi Koyejo
22 Oct 2024

Strong Model Collapse
Elvis Dohmatob, Yunzhen Feng, Arjun Subramonian, Julia Kempe
07 Oct 2024

Self-Improving Diffusion Models with Synthetic Data
Sina Alemohammad, Ahmed Imtiaz Humayun, S. Agarwal, John Collomosse, Richard G. Baraniuk
29 Aug 2024

LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives
Luísa Shimabucoro, Sebastian Ruder, Julia Kreutzer, Marzieh Fadaee, Sara Hooker
SyDa · 01 Jul 2024

A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, M. Nanni, ..., D. Gambetta, Giovanni Mauro, Virginia Morini, Valentina Pansanella, D. Pedreschi
29 Jun 2024

How Stable is Stable Diffusion under Recursive InPainting (RIP)?
Javier Conde, Miguel González, Gonzalo Martínez, Fernando Moral, Elena Merino-Gómez, Pedro Reviriego
DiffM · 27 Jun 2024

Understanding Hallucinations in Diffusion Models through Mode Interpolation
Sumukh K. Aithal, Pratyush Maini, Zachary Chase Lipton, J. Zico Kolter
DiffM · 13 Jun 2024

Beyond Model Collapse: Scaling Up with Synthesized Data Requires Reinforcement
Yunzhen Feng, Elvis Dohmatob, Pu Yang, Francois Charton, Julia Kempe
11 Jun 2024

Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions
Tian Xie, Xueru Zhang
12 May 2024

Heat Death of Generative Models in Closed-Loop Learning
Matteo Marchi, Stefano Soatto, Pratik Chaudhari, Paulo Tabuada
SyDa · VLM · AI4CE · 02 Apr 2024

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data
Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, Henry Sleight, ..., Andrey Gromov, Daniel A. Roberts, Diyi Yang, D. Donoho, Oluwasanmi Koyejo
01 Apr 2024

Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?
Fan Yao, Chuanhao Li, Denis Nekipelov, Hongning Wang, Haifeng Xu
23 Feb 2024

Towards Theoretical Understandings of Self-Consuming Generative Models
Shi Fu, Sen Zhang, Yingjie Wang, Xinmei Tian, Dacheng Tao
19 Feb 2024

Model Collapse Demystified: The Case of Regression
Elvis Dohmatob, Yunzhen Feng, Julia Kempe
12 Feb 2024

Beware of Words: Evaluating the Lexical Richness of Conversational Large Language Models
Gonzalo Martínez, José Alberto Hernández, Javier Conde, Pedro Reviriego, Elena Merino-Gómez
11 Feb 2024

Self-Correcting Self-Consuming Loops for Generative Model Training
Nate Gillman, Michael Freeman, Daksh Aggarwal, Chia-Hong Hsu, Calvin Luo, Yonglong Tian, Chen Sun
11 Feb 2024

A Tale of Tails: Model Collapse as a Change of Scaling Laws
Elvis Dohmatob, Yunzhen Feng, Pu Yang, Francois Charton, Julia Kempe
10 Feb 2024

Iterated Denoising Energy Matching for Sampling from Boltzmann Densities
Tara Akhound-Sadegh, Jarrid Rector-Brooks, A. Bose, Sarthak Mittal, Pablo Lemos, ..., Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong
DiffM · 09 Feb 2024

Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
Martin Briesch, Dominik Sobania, Franz Rothlauf
28 Nov 2023

Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
ELM · AI4MH · AI4CE · ALM · 22 Mar 2023

Diffusion Models are Minimax Optimal Distribution Estimators
Kazusato Oko, Shunta Akiyama, Taiji Suzuki
DiffM · 03 Mar 2023

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM · 24 Feb 2021

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018