ResearchTrend.AI

A Note on Shumailov et al. (2024): "AI Models Collapse When Trained on Recursively Generated Data"

16 October 2024
Ali Borji

Papers citing "A Note on Shumailov et al. (2024): 'AI Models Collapse When Trained on Recursively Generated Data'"

9 of 9 citing papers shown:
• Monitoring morphometric drift in lifelong learning segmentation of the spinal cord
  E. Karthik, Sandrine Bédard, J. Valošek, Christoph S. Aigner, E. Bannier, ..., Zachary Vavasour, Dimitri Van De Ville, Kenneth A. Weber II, Sarath Chandar, Julien Cohen-Adad
  02 May 2025
• LLM-Evaluation Tropes: Perspectives on the Validity of LLM-Evaluations [ELM]
  Laura Dietz, Oleg Zendel, P. Bailey, Charles L. A. Clarke, Ellese Cotterill, Jeff Dalton, Faegheh Hasibi, Mark Sanderson, Nick Craswell
  27 Apr 2025
• MultiConIR: Towards multi-condition Information Retrieval
  Xuan Lu, Sifan Liu, Bochao Yin, Y. K. Li, Xinghao Chen, Hui Su, Yaohui Jin, Wenjun Zeng, Xiaoyu Shen
  13 Mar 2025
• Predicting Practically? Domain Generalization for Predictive Analytics in Real-world Environments [OOD]
  Hanyu Duan, Yi Yang, Ahmed Abbasi, K. Tam
  05 Mar 2025
• Economics of Sourcing Human Data
  Sebastin Santy, Prasanta Bhattacharya, Manoel Horta Ribeiro, Kelsey Allen, Sewoong Oh
  11 Feb 2025
• Does Training on Synthetic Data Make Models Less Robust? [SyDa]
  Lingze Zhang, Ellie Pavlick
  11 Feb 2025
• Hands-On Tutorial: Labeling with LLM and Human-in-the-Loop [SyDa, VLM]
  Ekaterina Artemova, Akim Tsvigun, Dominik Schlechtweg, Natalia Fedorova, Konstantin Chernyshev, Sergei Tilga, Boris Obmoroshev
  28 Jan 2025
• Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits
  Tuhin Chakrabarty, Philippe Laban, C. Wu
  22 Sep 2024
• Take the essence and discard the dross: A Rethinking on Data Selection for Fine-Tuning Large Language Models
  Ziche Liu, Rui Ke, Feng Jiang, Haizhou Li
  20 Jun 2024