ResearchTrend.AI
arXiv:2208.12268
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning

25 August 2022
Haodong Zhao
Wei Du
Fang Li
Peixuan Li
Gongshen Liu
    FedML

Papers citing "FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning"

12 / 12 papers shown
FedSpaLLM: Federated Pruning of Large Language Models
Guangji Bai
Yijiang Li
Zilinghan Li
Liang Zhao
Kibaek Kim
FedML
20 Feb 2025
Decentralized Low-Rank Fine-Tuning of Large Language Models
Sajjad Ghiasvand
Mahnoosh Alizadeh
Ramtin Pedarsani
ALM
26 Jan 2025
Save It All: Enabling Full Parameter Tuning for Federated Large Language Models via Cycle Block Gradient Descent
Lin Wang
Zhichao Wang
Xiaoying Tang
17 Jun 2024
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation
Jie Xu
Karthikeyan P. Saravanan
Rogier van Dalen
Haaris Mehmood
David Tuckey
Mete Ozay
10 May 2024
Federated Learning Priorities Under the European Union Artificial Intelligence Act
Herbert Woisetschläger
Alexander Erben
Bill Marino
Shiqiang Wang
Nicholas D. Lane
R. Mayer
Hans-Arno Jacobsen
05 Feb 2024
Towards Urban General Intelligence: A Review and Outlook of Urban Foundation Models
Weijiao Zhang
Jindong Han
Zhao Xu
Hang Ni
Hao Liu
Hui Xiong
AI4CE
30 Jan 2024
Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation
Chen Dun
Mirian Hipolito Garcia
Guoqing Zheng
Ahmed Hassan Awadallah
Anastasios Kyrillidis
Robert Sim
04 Oct 2023
When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang
Chen Chen
Lingjuan Lyu
C. L. P. Chen
Yaochu Jin
AIFin
AI4CE
27 Jun 2023
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu
Kaixuan Ji
Yicheng Fu
Weng Lam Tam
Zhengxiao Du
Zhilin Yang
Jie Tang
VLM
14 Oct 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester
Rami Al-Rfou
Noah Constant
VPVLM
18 Apr 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan
Hrant Khachatrian
Jonathan May
AAML
01 Jan 2021
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Amirhossein Reisizadeh
Aryan Mokhtari
Hamed Hassani
Ali Jadbabaie
Ramtin Pedarsani
FedML
28 Sep 2019