Grounding Foundation Models through Federated Transfer Learning: A General Framework (arXiv:2311.17431)

29 November 2023
Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang
AI4CE

Papers citing "Grounding Foundation Models through Federated Transfer Learning: A General Framework"

13 papers shown

A Split-and-Privatize Framework for Large Language Model Fine-Tuning
Xicong Shen, Yang Liu, Huiqi Liu, Jue Hong, Bing Duan, Zirui Huang, Yunlong Mao, Ye Wu, Di Wu
25 Dec 2023

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
ALM
03 May 2023

Optimizing Privacy, Utility and Efficiency in Constrained Multi-Objective Federated Learning
Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang
FedML
29 Apr 2023

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji
ALM
27 Apr 2023

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
Changdae Oh, Hyeji Hwang, Hee-young Lee, Yongtaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song
VLM, VPVLM
26 Mar 2023

BBTv2: Towards a Gradient-Free Future with Large Language Models
Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
23 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML
29 Jan 2022

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021

Practical and Private (Deep) Learning without Sampling or Shuffling
Peter Kairouz, Brendan McMahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, Zheng Xu
FedML
26 Feb 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM
14 Dec 2020

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020