Training Overhead Ratio: A Practical Reliability Metric for Large Language Model Training Systems

14 August 2024
Ning Lu, Qian Xie, Hao Zhang, Wenyi Fang, Yang Zheng, Zheng Hu, Jiantao Ma

Papers citing "Training Overhead Ratio: A Practical Reliability Metric for Large Language Model Training Systems"

2 papers (all shown)
Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets
Ning Lu, Shengcai Liu, Jiahao Wu, Weiyu Chen, Zhirui Zhang, Yew-Soon Ong, Qi Wang, Ke Tang
17 May 2025
Hardware-Aware DNN Compression for Homogeneous Edge Devices
Kunlong Zhang, Guiying Li, Ning Lu, Peng Yang, Shengcai Liu
25 Jan 2025