Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions
arXiv: 2412.16504

21 December 2024
Hao Du, Shang Liu, Lele Zheng, Yang Cao, Atsuyoshi Nakamura, Lei Chen
    AAML

Papers citing "Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions"

2 papers shown
What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices
Sander Noels, Guillaume Bied, Maarten Buyl, Alexander Rogiers, Yousra Fettach, Jefrey Lijffijt, Tijl De Bie
04 Apr 2025
VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding
Zeng Wang, Minghao Shao, M. Nabeel, P. Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, J. Knechtel
17 Mar 2025