PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs (arXiv:2407.01031)

1 July 2024
Dan Peng
Zhihui Fu
Jun Wang

Papers citing "PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs"

4 / 4 papers shown
Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks
Yequan Zhao, Xinling Yu, Xian Xiao, Z. Chen, Z. Liu, G. Kurczveil, R. Beausoleil, S. Liu, Z. Zhang
17 Feb 2025
On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists
Dongyang Fan, Bettina Messmer, N. Doikov, Martin Jaggi
MoMe, MoE
20 Sep 2024
Smart at what cost? Characterising Mobile Deep Neural Networks in the wild
Mario Almeida, Stefanos Laskaridis, Abhinav Mehrotra, L. Dudziak, Ilias Leontiadis, Nicholas D. Lane
HAI
28 Sep 2021
ZeRO-Offload: Democratizing Billion-Scale Model Training
Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He
MoE
18 Jan 2021