ESLM: Risk-Averse Selective Language Modeling for Efficient Pretraining
arXiv:2505.19893 · 26 May 2025
Melis Ilayda Bal
Volkan Cevher
Michael Muehlebach

Papers citing "ESLM: Risk-Averse Selective Language Modeling for Efficient Pretraining"

3 papers shown
TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
Chanjoo Jung
Jaehyung Kim
06 Oct 2025

LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws
Prasanna Mayilvahanan
Thaddäus Wiedemer
Sayak Mallick
Matthias Bethge
Wieland Brendel
17 Feb 2025

Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining
International Conference on Learning Representations (ICLR), 2025
Daouda Sow
Herbert Woisetschläger
Saikiran Bulusu
Shiqiang Wang
Hans-Arno Jacobsen
Yingbin Liang
10 Feb 2025