ResearchTrend.AI

arXiv:2503.01297 · Cited By

Regularization-based Framework for Quantization-, Fault- and Variability-Aware Training

3 March 2025
Anmol Biswas
Raghav Singhal
Sivakumar Elangovan
Shreyas Sabnis
U. Ganguly

Papers citing "Regularization-based Framework for Quantization-, Fault- and Variability-Aware Training"

1 of 1 citing papers shown

Scaling Up On-Device LLMs via Active-Weight Swapping Between DRAM and Flash
Fucheng Jia
Zewen Wu
Shiqi Jiang
Huiqiang Jiang
Qianxi Zhang
Y. Yang
Yunxin Liu
Ju Ren
Deyu Zhang
Ting Cao
11 Apr 2025