ResearchTrend.AI

A Split-and-Privatize Framework for Large Language Model Fine-Tuning


25 December 2023
Xicong Shen, Yang Liu, Huiqi Liu, Jue Hong, Bing Duan, Zirui Huang, Yunlong Mao, Ye Wu, Di Wu
ArXiv · PDF · HTML

Papers citing "A Split-and-Privatize Framework for Large Language Model Fine-Tuning"

2 papers shown

1. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   18 Apr 2021

2. Extracting Training Data from Large Language Models
   Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
   14 Dec 2020